
Some say artificial intelligence (AI) will be the next big thing after the internet: a tool enabling new industries and improving the lives of ordinary people. Others think AI is the greatest threat to society as we know it. This article will try to explain why both parties are correct.

 

 

As humans, we have a visual system that allows us to see (extract and understand) shapes, colours and contours. So how is it that we recognise every image as something distinct? How do we know, for example, that a box in an image is, in reality, a box? And how do we know what a plane is, or what a bird is?

 

 

One very important part of working as a Data Scientist is often overlooked. I'm referring to the part which involves visualising data and results. You probably don't think of this as an exciting task, but let me explain why it is so important. Data Scientists are usually fascinated by building complicated models and trying out new architectures. However, as the job title suggests, the main part of a Data Scientist's work has to do with data. As a Data Scientist, at the very start of every project, you begin by familiarising yourself with the datasets on which you will be working. The easiest way to do this is simply to open the files and look at them. In the case of images, it is quite easy to understand the content without special tools or packages. For most other kinds of data, however, visualisation is the best option, not only for gaining your first insights, but also for finding the first patterns, trends or outliers.
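
If you work in Python, this first textual and visual pass over a new dataset takes only a few lines. The sketch below is a minimal example using pandas and matplotlib; the file name "data.csv" and its contents are hypothetical placeholders for whatever dataset you are exploring.

import pandas as pd
import matplotlib.pyplot as plt

# "data.csv" is a hypothetical placeholder for your own dataset.
df = pd.read_csv("data.csv")

# Textual first pass: column types, missing values, basic statistics.
df.info()
print(df.describe())

# Visual first pass: histograms of every numeric column often reveal
# outliers, skew and odd values that a table of numbers hides.
df.hist(bins=30, figsize=(10, 6))
plt.tight_layout()
plt.show()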

Our brains are fascinating "machines" which can take in a great deal of information from just one picture. It is not without reason that we say that a picture is worth a thousand words. That is why we need to create high-quality visualisations. By high quality, I am not really referring to the choice of colours, but to how the information is presented. Unfortunately, nearly every day we see graphs and plots that have been poorly designed. The problem is not that they look poor from an aesthetic point of view, but that the way in which the information is presented is misleading or deceptive (sometimes even on purpose). For example, Reuters published the plot below, which I consider one of the best examples of a misleading plot.

[Image: Reuters line plot of the number of murders in Florida, drawn with an inverted y-axis]

Above, we see a line plot with marked points and, at first sight, the plot looks correct. One of the points on the timeline has a description ("2005 Florida enacted …"), which seems to show that a change was made in the law and that, as a result, there was a drop in the number of murders. But wait … let's look carefully at the scale on the y-axis. From what we learned at school (0 sits at the bottom left-hand corner and larger values are placed progressively higher up the axis), we expect every graph to follow the same rules. In this case, however, the author inverted the y-axis, placing 0 at the top rather than at the bottom, which means that it is not immediately obvious how to interpret the plot. The simplest way to fix this is to flip the plot about the x-axis. Once you do, you see that the red area is not the background of the plot but the actual area under the curve, and the conclusions you draw from the corrected plot are quite different from those suggested by the original.
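
To see for yourself how much the orientation of the y-axis changes the story, here is a minimal matplotlib sketch. The numbers are made up for illustration; they are not the Reuters data.

import matplotlib.pyplot as plt

# Illustrative data only, not the original Reuters figures.
years = [1990, 1995, 2000, 2005, 2010]
murders = [800, 700, 600, 500, 750]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(years, murders, marker="o")
ax1.invert_yaxis()  # zero at the top: an increase looks like a drop
ax1.set_title("Inverted y-axis (misleading)")

ax2.plot(years, murders, marker="o")
ax2.set_ylim(bottom=0)  # conventional orientation: zero at the bottom
ax2.set_title("Conventional y-axis")

for ax in (ax1, ax2):
    ax.set_xlabel("Year")
    ax.set_ylabel("Number of murders")

plt.tight_layout()
plt.show()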

 

One type of plot is doomed to failure. You might be surprised to learn that I'm talking about the pie chart, which you know so well from presentations and the media. Unfortunately, this chart has a fundamental problem built into its very definition: human brains are not good at comparing angles. In a pie chart, we compare angles to say which group is bigger or smaller relative to the 100% data total. We can say that element A has a bigger or smaller share but, if the question is "how many times is part A bigger than part B?", it is impossible to answer without listing the actual percentages for A and B. Another problem is that it is practically impossible to compare two pie charts just by looking at them. Here is an example:

[Image: two pie charts shown side by side]

If you want to know how any segment in the left-hand chart differs from the corresponding segment in the right-hand chart, you can only guess, because you do not know the underlying numbers. I hope that you now understand why scientists of all types hate pie charts. The statistician John Tukey put it this way: "There is no data that can be displayed in a pie chart, that cannot be displayed BETTER in some other type of chart". And a 3D pie chart is even worse than a standard one! The 3D version may look fancier to some people, but it only makes it more difficult for readers to understand the structure of the data.
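
A quick way to convince yourself is to draw the same shares as a pie chart and as a bar chart. The sketch below uses made-up values; notice how much easier it is to rank and compare the bars, because they share a common baseline.

import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D"]
shares = [30, 25, 24, 21]  # close values are hard to rank by angle alone

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

ax1.pie(shares, labels=labels)
ax1.set_title("Pie chart: compare angles")

ax2.bar(labels, shares)
ax2.set_title("Bar chart: compare lengths")
ax2.set_ylabel("Share (%)")

plt.tight_layout()
plt.show()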

 

The choice of colour palette is another challenge in data visualisation. This especially applies to choropleth maps, in which different elements are assigned different colours. You probably remember geography lessons in which you looked at a map showing mountains, lowlands, seas, etc. in different colours. This approach can be helpful, especially when we look at a map and want to quickly find, for example, mountain ranges.

[Image: elevation map with a colour scale running from purple to red]

On the other hand, the example above shows how a colour palette can confuse our understanding of the data. Looking at the scale on the left-hand side, we see that the minimum value is represented by purple and the maximum by red. However, if we want to know which colour represents the middle of the range, there is no way to work it out. Next, note that some parts of the map are coloured white. White usually means that a value is 0 or that there is no data for a given area. That is not the case here, where white is assigned to values of around 200 metres, and 200 metres is nowhere near the middle of the range. The final problem is the scale itself: the intervals for most colours are roughly 20 metres wide, but the last interval spans about 3,000 metres. That is a big surprise.
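
One pragmatic safeguard, if you plot in Python, is to prefer a perceptually uniform colormap such as matplotlib's viridis, where equal steps in the data correspond to roughly equal steps in perceived colour, so mid-range values are easy to locate. The sketch below compares it with a rainbow-style palette; the elevation grid is randomly generated and used only for illustration.

import numpy as np
import matplotlib.pyplot as plt

# Fake "terrain" in metres, randomly generated for illustration only.
rng = np.random.default_rng(0)
elevation = rng.normal(200, 60, size=(50, 50))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

for ax, cmap in ((ax1, "jet"), (ax2, "viridis")):
    im = ax.imshow(elevation, cmap=cmap)
    ax.set_title(f"cmap='{cmap}'")
    fig.colorbar(im, ax=ax, label="Elevation (m)")

plt.tight_layout()
plt.show()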

 

Did you know that, if you want to cheat or hide some "inconvenient" data, bar plots are the best choice? Of course, I'm not suggesting that you would want to mislead anyone; I only want to help you avoid being misled by others. One method of cheating is to create the illusion of a "big difference" by using a scale which does not begin at 0. When the scale does not start from zero, we see only the tip of the iceberg, above a starting point chosen because it was convenient for the author. A good example is the plot below. At first sight, it looks as if women in Latvia are several times taller than their counterparts in India. But this is not true! The real difference is about 5 inches, which is less than 10% of the average height across the dataset.

[Image: bar chart of average female height in Latvia and India, with a y-axis that does not start at zero]
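
Here is a minimal sketch of the trick and its fix. The heights below are approximate figures chosen to match the roughly 5-inch difference mentioned above; they are for illustration only.

import matplotlib.pyplot as plt

countries = ["India", "Latvia"]
heights_in = [62, 67]  # approximate average female height in inches

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

ax1.bar(countries, heights_in)
ax1.set_ylim(60, 68)  # truncated axis: a ~8% gap looks enormous
ax1.set_title("Axis starts at 60 (misleading)")

ax2.bar(countries, heights_in)
ax2.set_ylim(0, 70)  # zero-based axis: the difference is modest
ax2.set_title("Axis starts at 0")

for ax in (ax1, ax2):
    ax.set_ylabel("Average height (inches)")

plt.tight_layout()
plt.show()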

From the examples above, you can see how crucial it is to present data and information in a way that can be understood clearly and correctly. So, when you are at the stage of exploring a new dataset, I suggest that you try out various ways of presenting the data, and then see which form of presentation works best for a particular purpose. At this stage of a project, make sure that you don't build unfounded assumptions into the presentation, as this can lead to misunderstandings and then to wrong decisions, for example when choosing the right model.

 

Visualisation is also often used at the very end of a project, when you have your results and want to present them to your teammates or your clients. It is important not to make mistakes when presenting data at this stage. If you do, stakeholders may misunderstand the results, which can cost time and money, and can lead to them making poor decisions further down the line. Never underestimate the power of visualisations. As humans, we prefer to look at pictures rather than read through all the data, and that means that presenting data in this summarised form carries important obligations with it. In conclusion, think carefully about how you present to readers any plot, graph or other graphical representation of data.

 

 

 

 

 


WILL AI REPLACE CGI ARTISTS?

 

INTRODUCTION

Artificial intelligence is one of the fastest-evolving domains of science. It influences the route we take when we drive to work, how we use our phones and computers, what we buy and what we will see in cinemas in the near future. Artificial intelligence is having a remarkable impact on the development of the film industry, an area that has been growing almost as fast as AI itself in recent years. As with AI, the development of computers and the growth in computing power have taken the industry to a whole new level. We have now reached a point where tools using machine learning models are part of the day-to-day filmmaking process, and this could herald the next revolution in the field. One thing that most movies today cannot do without is special effects. On the one hand, advanced special effects are expected by the viewer; on the other hand, they often consume a large percentage of a film's budget. The question must therefore be asked: is it not possible, in this age of ubiquitous automation, to use artificial intelligence to create special effects more quickly and cheaply? Or could we even completely replace CGI (computer-generated imagery) artists with AI systems? Let's start from the beginning.

 

BRIEF HISTORY OF CGI

The history of CGI begins about 50 years ago, in the 1970s. Of course, this is not the beginning of special effects themselves: various tricks had been used since the 1920s to achieve the desired illusion on the big screen. As the special effects of that time were based mainly on costumes and on systems that often used advanced engineering, they were called engineering effects. The most famous examples of such effects are Alien (1979) and The Thing (1982).

 

[Image: Alien (1979)]

The first movie to make use of CGI was Westworld (1973), followed by Star Wars: Episode IV (1977) and Tron (1982). Despite the simplicity of the CGI effects used in these films, they were undoubtedly a breakthrough in the film industry. These movies showed filmmakers a completely new path, one which has been explored extensively ever since. The following years saw films such as Indiana Jones and the Last Crusade (1989), Terminator 2: Judgment Day (1991), Independence Day (1996), The Matrix (1999), The Lord of the Rings (2001) and King Kong (2005). Each of these films brought new, more advanced solutions which made special effects an increasingly important part of the film. In 2009, the premiere of Avatar took the importance of special effects to a whole new level: the effects were the main marketing value of the entire film. At this point, we have reached a time where a movie can be generated almost entirely through the use of CGI and is distinguished from animation only by the use of actors and perhaps some residual choreography. A great example of this type of film is the Hobbit trilogy, which was shot almost entirely on a green screen.

 

 


 

HOW IS AI CHANGING THE FILMMAKING PROCESS?

But why have special effects progressed so quickly? Well, they are not the only thing that has developed over the last 50 years. Artificial intelligence is an area which has an incredible impact on the lives of everyone on our planet, and it is also one of the fastest-developing fields in the world today. At the moment, we are witnessing great interest in generative models, which are capable of producing previously unseen, original content. The potential of such models is enormous and, when used correctly, they can facilitate the work of both artists and computer graphic designers. It is AI that has brought special effects to such a high level today. We must therefore ask ourselves whether artificial intelligence is capable of reaching a level at which it could replace CGI artists and automatically generate special effects for films.

 

We have to start with the fact that, no matter how impressive the results of generative models are at the moment, a human being does not have full control over what a model generates. The model is not taught to interpret human ideas; it is not able to understand what the director has in mind, or to propose alternative solutions or interesting ideas of its own. For a model to fully replace a human being as a CGI artist, it would need to interpret human ideas at a level of abstraction equal to a human's. It would need to understand not only the command given to it, but also the idea behind that command and the way of thinking of the human giving it. Such a level of interpretation of human ideas is still the domain of science fiction movies.

 

However, this doesn't mean that artificial intelligence is useless when it comes to generating special effects. On the contrary, it can be said that in recent years it is artificial intelligence and its development that have driven the development of software for film editing and for generating special effects. This can be seen in the number of features in such software that are based on machine learning models. From the technical side, however, machine learning models are advanced tools that require a trained specialist to operate. What is more, the specialist does not even need to be aware that he or she is using artificial intelligence models. It is enough to know that the function exists in the video editing software, to understand its parameters and to be able to use it correctly. The fact that an artificial intelligence model is running underneath does not really matter to the user, because it is the effect that counts. And the effects are good and delivered quickly.

 

In addition, the current level of sophistication of machine learning models in image processing allows a task to be completed with a single click where, several years ago, it might have taken several days or required professional equipment. For the producer, of course, this means faster production, which translates into lower costs for both staff and equipment. A good example is the Motion Capture system, which has been very popular among filmmakers for several years now. It allows human or animal movements to be recorded and then transferred to computer-generated characters so that their movements look more realistic. In February of this year, scientists from ETH Zurich and the Max Planck Institute published a paper on Vid2Avatar, a method that makes it possible to generate accurate 3D avatars from nothing more than a 2D video [1]. The results are astounding, and there is no denying that models of this type have huge potential to replace the Motion Capture system. Considering the size and cost of renting and operating a Motion Capture system, and comparing it with the cost of simply recording an actor performing a specific sequence of movements, the savings for film producers could be enormous, with no loss of quality.

 

 


 

 

CONCLUSION

So the answer to the title question seems simple and hardly surprising: AI will not replace CGI artists, at least not for now. However, there is no doubt that machine learning is having an enormous impact on the filmmaking process, making it faster and cheaper. The evolution of special effects and film editing software, as well as the example of Vid2Avatar cited above, show that just as the introduction of computers began a revolution in filmmaking, so the use of machine learning models is beginning a whole new era in the filmmaking process.

 

[1] Vid2Avatar, https://moygcc.github.io/vid2avatar/

 

 
