So, I have reached my final blog of the CRE101 series. Firstly, thank you for taking the time to read each and every one. I hope you learned something new. For this final blog, I am going to talk about the future of creative technologies. I will be covering interactive performance, interaction in architecture and generative art.
Interactive Performance

Nowadays, interactive performance usually involves individuals interacting with a real-time environment to produce a unique audio and visual output. One popular program for interactive performance is Cycling '74's Max/MSP. Having some experience with Max myself, I am aware of the interesting visual and auditory outputs achievable using sensors and actuators. Max includes a package known as Vizzie, which contains a number of very interesting visual effects that can be controlled by sensors measuring distance, light and pressure. Interactive performance often involves an element of projection mapping: the process of turning objects and surfaces into display surfaces for video projection.
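Max patches are built graphically rather than in text, but the underlying idea of a Vizzie-style sensor mapping can be sketched in a few lines of Python. This is purely illustrative; the ranges and parameter names below are my own assumptions, not anything from Max itself:

```python
# Hypothetical sketch: mapping a raw sensor reading onto a visual
# parameter, the same idea a Vizzie patch expresses graphically.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a raw sensor value into a target output range."""
    value = max(in_min, min(in_max, value))  # clamp to the input range
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# e.g. a distance sensor reporting 0-400 cm controlling video brightness 0.0-1.0
brightness = scale(120, 0, 400, 0.0, 1.0)
print(round(brightness, 2))  # 0.3
```

In a live performance the same mapping would simply run on every new sensor reading, so the visuals track the performer continuously.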
As you can see from Figure 1, projection mapping creates an augmented reality for the performer and audience. Depending on the commands used, the visuals can be quite stunning. Moving into the future, I believe artists will move away from traditional art forms towards artistic display through augmented and virtual reality. As the technology advances, the artistic potential will only increase.
Interaction in Architecture

When it comes to interaction in architecture, Miguel Chevalier is perhaps one of the greatest artists of his era.
Figure 2 above shows a piece of Chevalier's work. Here in Cambridge, a projection of the cosmos onto a cathedral amazed visitors. During this particular exhibition, a relaxing audio track played in the background as visitors wandered around, looking into the cosmos. The combination of visuals and audio created a very ambient setting for the audience. The key with this sort of interaction is that it relies on architectural features, expanding the range of opportunities for producing a unique audio/visual output. By using cameras and motion sensors, one can program the visuals to react to the user's real-world movements, giving them a sense of control over the artistic output. Chevalier has also utilised the idea of the next topic, generative art: he has produced real-time projections of sky charts onto buildings, which again produces an interesting visual for visitors.
Generative Art

Generative art is art that is generated continuously by mathematical algorithms established in a program. It really encompasses the notion of interactive displays. In my view, generative art can lead to some really impressive outputs, and some not-so-impressive ones, depending on the algorithms used! I know, for instance, that one can establish an automated system that randomly generates an algorithm, which in turn randomly generates a visual output. I believe generative art appeals to a wider audience than traditional forms due to its unique method of creation and output. When combined with interactive performance, you can achieve some very impressive results.
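To make the "randomly generated algorithm" idea concrete, here is a minimal, purely illustrative Python sketch (not any particular artist's method): randomness picks the parameters of a drawing rule, and the rule then deterministically renders a pattern from them.

```python
import random

# Illustrative sketch of generative art: the "algorithm" itself is
# generated at random (from a seed), then rendered deterministically.

def generate_algorithm(seed):
    """Randomly pick the parameters of a simple drawing rule."""
    rng = random.Random(seed)
    return {"step": rng.choice([1, 2, 3]),
            "width": rng.randint(20, 40),
            "chars": rng.choice([".*", "#-", "o "])}

def render(params, rows=6):
    """Render a text pattern from the generated rule."""
    art = []
    for y in range(rows):
        line = "".join(params["chars"][(x * params["step"] + y) % 2]
                       for x in range(params["width"]))
        art.append(line)
    return "\n".join(art)

print(render(generate_algorithm(42)))
```

Changing the seed changes the "algorithm", and with it the whole output, which is exactly why some generated pieces land and others do not.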
Future

Looking ahead, I predict massive leaps in technology over the coming ten years. I fear technology for artistic output will be neglected; however, I am confident there will be those dedicated to the cause, who will continue to develop technologies like AR for interactive audio and visual outputs. I have already discussed the future of technology in a previous blog, particularly around A.I. and its potential advantages and disadvantages, and I would strongly encourage you to take a look. Once more, thank you for joining me on this short journey; I hope to continue long into the future.
The areas I will be covering in this piece include: Human Computer Interaction (HCI), User Experience (UX), Heuristics (Usability) and Inclusive Design.
HCI

Human Computer Interaction (HCI) looks at the ways in which computers and humans interact. As time has passed, the ways in which that interaction can take place have changed significantly. For example, when the graphical user interface (GUI) became popular in the 1980s, it revolutionised the way humans could interact with a terminal or computer. Before the GUI came about, the main method of interacting with a computer was through text-based command-line interfaces, which proved a real challenge for non-expert users. The GUI introduced icons which could be selected with a mouse; it was simple, and it opened computers up to more people. Nowadays, interaction with computers is simpler still: voice commands and body motions can be used to control 'intelligent' computer systems. If we take voice technology for a moment, it is plain to see how much it has developed since 4th October 2011. Why that date? Well, on that day, Apple launched the iPhone 4S with a beta version of Siri, the voice-commanded assistant who lives within your phone. The services Siri could fulfil in the beginning were primitive to say the least: weather, time and the odd joke! Apple only really began to focus on Siri's intelligence from 2016. As of writing, Siri can set alarms, send messages, take notes, set reminders and play music, to name a few things.
Voice command software like Siri has revolutionised HCI. It has made life easier, particularly for those with certain disabilities. More recently, virtual reality technology has broken into the mainstream market and is being distributed to a wider audience through already popular channels such as Microsoft and Sony.
Headsets like those in Fig. 2 have been growing in popularity amongst gamers due to the immersive feel they provide. This is key to HCI: the human has reached a point (or almost reached a point) where distinguishing between reality and the virtual world is impossible. In this instance, the user all but forgets that there are any computers involved, and I think this is the ultimate aim of HCI development. There is a question as to whether virtual reality is a good thing; it could (I am not saying it will) lead to greater social isolation and a lack of basic life skills. I do, however, believe that as HCI is researched and devices are developed accordingly, we will reach a stage where technology is used for a greater number of tasks and ultimately benefits humanity, provided we keep regulations in place (no Terminator cases!).
UX

User Experience design challenges a designer to examine who is actually using their app or device (the user), identify frustrations with it, and establish how those frustrations could be removed by testing solutions. Prototyping is a very important process in UX, as any designer will tell you. The cycle of creating a solution, giving it to the user, receiving feedback and then creating another solution matters: it lets the end user feel involved in the creative process, and it gives the designer a chance to improve their own skills. Nowadays UX is taken into account so seriously that the majority of new websites, apps and devices will have undergone months upon months of rigorous UX design.
Heuristics

Another important aspect of design is whether or not a device is usable in accordance with a set of heuristics, or standards.
The man in Figure 3 above is Jakob Nielsen. He set out ten heuristics that he said should be followed when evaluating the usability of a device. Without listing all of them, they are pretty important and make sense. For instance, instead of relying on good error messages, we should have a system that prevents errors in the first place. Likewise, a system should communicate with the user in human language, using terminology humans can understand, as opposed to computer language and references. When you think about it, heuristics are a vital part of design: they underpin what makes a system work. I feel heuristics can be developed by listening to user feedback. For instance, Ben Shneiderman highlights the fact that users want to feel in control of the interface; they want to feel as if it is responding to them, with no repetitive data entries or tedious, unnecessary tasks.
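Nielsen's error-prevention and plain-language heuristics can be shown in a small, hypothetical Python sketch: validate input up front, and when a message is unavoidable, phrase it in words a human would actually use rather than an error code.

```python
# Sketch of two of Nielsen's heuristics: prevent errors before they
# happen (validate early), and speak the user's language when a
# message is unavoidable.

def parse_age(text):
    """Return (age, None) on success, or (None, friendly_message) on failure."""
    text = text.strip()
    if not text.isdigit():
        # plain-language message, not "ERR_INVALID_INT (0x16)"
        return None, "Please enter your age as a whole number, e.g. 34."
    age = int(text)
    if not 0 < age < 130:
        return None, "That age looks out of range. Please check it."
    return age, None

age, message = parse_age("thirty")
print(message)
```

The same idea scales up: a form that only accepts digits in an age field prevents the error entirely, which Nielsen ranks above even the best-worded error message.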
Inclusive Design

Designers nowadays are taking inclusivity into account more than ever. Inclusive design considers those with disabilities but does not separate them from those without; in other words, the devices or applications designed can be used by anyone, regardless of their situation. Whilst I agree with the idea of inclusive design, it is important that designers really try to remain as open to everybody as possible. For instance, someone who cannot speak or move their arms is excluded by voice-command technology alone. A prime example is the late Professor Stephen Hawking. ALS left him paralysed, unable to speak or move except for a few small facial muscles. Technology was developed that allowed him to use those muscles to select words on a screen, which were then spoken aloud by a computerised voice. That is a real-life example of inclusive design, perhaps at its extreme.
In this most recent piece, I aim to review big data, its implications and its benefits. I will also discuss the idea of the Internet of Things, or IoT. I hope you enjoy it!
This is a topic which has engulfed the news media recently. Before I begin, I'll start with a definition of big data: data sets that are so voluminous and complex that traditional data-processing software is inadequate to deal with them. Essentially, this means one must employ a company of some description to handle and process the data.
One such company is Cambridge Analytica.
Cambridge Analytica and Facebook are in hot water over reports alleging that they harvested millions of users' data (without their consent), psychologically profiled those users and delivered pro-Trump material to their newsfeeds in the run-up to the 2016 US Presidential Election. Both companies have denied any wrongdoing, and until official investigations have concluded, it would be wrong to jump to conclusions.
But I can discuss the implications for humanity if this did in fact occur. The most worrying issue is that your personal data can be sold to large companies for use against you; in this case, it was allegedly used to influence one's voting intention. I believe Cambridge Analytica's misuse of over 50 million Facebook profiles highlights a severe privacy flaw within Facebook's infrastructure. Facebook is bound by its terms and conditions to protect user data (although, ironically, it is allowed to share that data with third parties), and it has failed. Those 50 million users should count themselves lucky that their data only went to a company like Cambridge Analytica… imagine if a foreign power could harvest such a large volume of personal data (who's to say they can't already?).
Facebook should, in my opinion, face consequences for this lapse in data protection. I am not in a position to say what those consequences should be, but it would be wrong to allow such a global company to go unpunished. I do have a slightly different take on Cambridge Analytica's role in all of this. Over the past few weeks, as this story has developed, there has been a sort of false portrayal of Cambridge Analytica as some kind of dark, dystopian force that manipulates elections globally. I also find it rather amusing that thousands of people believe this single whistle-blower; it's quite a delusional position to take, if you ask me.
I am also of the belief that many people, particularly anti-Trump folk, are only 'outraged' because the data was used to influence an election whose winner was someone out of the norm. President Trump is not a traditional President by any means; his pro-life views (which are to be respected) anger many quite frankly undemocratic people. To me, the anti-Trump folk who are 'outraged' have side-lined the real issue in the Cambridge Analytica story: the severe privacy flaw within Facebook. If people directed their opinions towards Facebook rather than simply attacking the US President on Twitter, I think Facebook would be encouraged to take more of a stand.
Internet of Things

The Internet of Things was a term first used by Kevin Ashton to describe a network of physical devices embedded with electronics that allow them to communicate with users and with other devices and objects.
These physical devices have, in my view, revolutionised modern society: they are now extremely accessible and in some cases enhance the safety of their users. The idea of accurate geolocation technology being available to the public would have been laughed at twenty years ago; nowadays, some of the most advanced and accurate geolocation software is found on a mobile phone. While there are instances where geolocation technology is a bad thing (location sharing on Facebook has been linked to burglaries), I firmly believe the benefits far outweigh the costs. The obvious example is the ability of the emergency services to pinpoint a caller's location from their mobile phone, which regularly saves lives. It is extraordinary to think that mobile phones have progressed so far, from Fig. 3 to a powerful mini-computer in your pocket.
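As a rough illustration of the maths behind geolocation, the haversine formula is the standard way software turns two latitude/longitude readings into a ground distance. The coordinates below are just example values:

```python
from math import radians, sin, cos, asin, sqrt

# Illustrative sketch: great-circle distance between two GPS fixes,
# the basic calculation underlying location-based services.

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in kilometres between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km is Earth's mean radius

# e.g. Belfast to Dublin (roughly 140 km)
print(round(haversine_km(54.5973, -5.9301, 53.3498, -6.2603), 1))
```

A phone combines fixes like these from GPS, Wi-Fi and cell towers, which is what lets emergency services narrow a caller's position down so quickly.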
It is important to remember that the Internet of Things covers a wide range of devices, not just mobile technology. Probably the most relevant to discuss is the smart car, given the recent news of a number of autonomous cars crashing and killing motorists and pedestrians. Setting aside the fully driverless car for a moment, nearly every car produced within the last ten years relies on technology, and that technology has advanced significantly over those ten years. It includes, but is not limited to: GPS/sat nav, parking sensors, rear-view cameras and light sensors. The light sensors in particular have become widespread only in recent years, and they are quite extraordinary: many newly produced cars will switch on the headlights automatically once the light level outside drops below a certain threshold. For me, these technological advances benefit the driver experience and arguably improve overall safety on the road.
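The automatic-headlight behaviour described above is essentially a threshold rule. Here is a minimal sketch with assumed light levels (real manufacturers' thresholds will differ):

```python
# Sketch of automatic-headlight logic. The lux values are illustrative
# assumptions, not taken from any manufacturer. Using two thresholds
# (hysteresis) stops the lights flickering when dusk light hovers
# around a single cut-off.

ON_BELOW_LUX = 400   # assumed: switch on below this light level
OFF_ABOVE_LUX = 700  # assumed: switch off above this light level

def update_headlights(lux, currently_on):
    """Decide the new headlight state from a light-sensor reading."""
    if lux < ON_BELOW_LUX:
        return True
    if lux > OFF_ABOVE_LUX:
        return False
    return currently_on  # in the gap between thresholds, keep the current state

state = False
for reading in [900, 650, 380, 500, 750]:  # a drive into and out of dusk
    state = update_headlights(reading, state)
    print(reading, "lights on" if state else "lights off")
```

The two-threshold design is the interesting part: with a single cut-off, a reading bouncing between 399 and 401 lux would toggle the lights on and off repeatedly.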
I do, however, have an issue with the idea of driverless vehicles. Setting recent news stories aside, just the concept of being in a car without immediate control of it is disturbing. There are so many factors to consider when it comes to driverless vehicles, safety being the primary one. Recently a Tesla car crashed in Autopilot mode, killing the driver: http://www.latimes.com/business/la-fi-tesla-accident-computer-logs-20180330-story.html At first glance one might write off driverless technology completely given this incident; however, in this case Tesla's computer logs indicated that the victim's hands were off the wheel for six seconds before the crash, and that he did not respond to visual and auditory warnings from the car. It is possible that the victim passed out behind the wheel, but even so, it proves the point that the car, in Autopilot mode, could not safely pull over when the driver failed to respond to warnings.
In the grand scheme of things, I do believe we will reach a point where a car can safely pull over if a driver does not respond to warnings, but until such technology has been tried and tested, and tested again, I do not think we should be promoting driverless vehicles to such a high extent. In the interim, we should be developing current technologies to further improve the driver experience, while also looking at new technologies to revolutionise motoring for millions. As a side note, I also strongly believe we should continue to work on electric cars and on phasing out fossil-fuel-driven vehicles.
The term Internet of Things is an interesting description. In a way it's ironic that the phrase was first coined in 1999, long before the majority of these revolutionary technologies were born, yet the term is as relevant now as it was back then. It is a universal descriptor of humanity's drive to develop and automate everyday tasks. Whilst I accept the notion of artificial intelligence becoming part of everyday life in the future, I do think we should strive for some degree of regulation over such technologies. We cannot be doing with a Terminator situation…
Semiotics is an in-depth concept, and one for which a 'simple definition' is hard to find. Nevertheless, I have decided to adopt the following definition of semiotics: the study of signs and their overarching meaning in society. In class we recently discussed the origin of semiotics and linked it to a man by the name of Ferdinand de Saussure. Saussure essentially developed a linguistic model which would later evolve into the structuralist model we know today.
A sign can be anything: a photograph, a word, an object, etc. More importantly, a sign can be broken down into a signifier and a signified. An example we used in class to demonstrate this point was the linguistic sign 'chair'. From this example we identified that the word 'chair' was the signifier and the physical chair in the room was the signified. There are endless examples of this kind of make-up of a sign. Because we were raised in a society where structuralist theory prevails, there was no confusion between the word 'chair' and the physical object.
As I mentioned above, Saussure developed structuralist theory, but there is perhaps an even more relevant idea in use today: poststructuralism. Poststructuralism evolved from Saussure's own ideas and was developed further by an admirer of his, Roland Barthes. Barthes was a French philosopher who focused heavily on the link between the signifier and the signified. I believe Barthes paved the way for cultural perceptions in 2018. Allow me to expand: what does a Rolls-Royce say about its owner? What does a private jet say about its owner? What does a criminal record say about a person? These are questions that can be asked, and answered, using the poststructuralist idea. Society tends to view those with a criminal record as people you should avoid, bad influences if you will. Similarly, people who own a private jet or a Rolls-Royce are deemed successful in society and in some cases are looked up to by fellow citizens (young entrepreneurs, for instance).
In Barthes's poststructuralist world, the sign of the criminal record signifies a person's guilt. From here, I'd like to introduce another focal point of Barthes's studies: connotation and denotation. Here is why it is relevant. Continuing the previous example, if we look at the linguistic sign 'guilt', then the denotation is the definition of the word: the fact of being responsible for the commission of an offence. The connotation is what the 'guilt' relates to, i.e. what the person has done. We could dissect this further and argue that connotations lead to judgement of others, and this in turn links back to Barthes's theory of the sign, signifier and signified.
A final element we discussed in class was the idea of the 'male gaze', and from that we examined some examples in advertising. I won't be talking about modern-day feminism, because I believe it has turned into a toxic movement which does not focus on women's rights and instead focuses on degrading men; I do, however, hark back to the original feminist movements (the right to vote, etc.). So, in my view, perfume ads are intriguing. Let's take a look at Fig. 3, which advertises a perfume for women. Note: for women. Interestingly, what stands out the most is Megan Fox, not the perfume. Bearing in mind this is a perfume advert, should the model stand out more than the perfume?
I suppose it's down to personal opinion. We could also question the man who is staring at Ms. Fox in a lustful manner: what is the purpose of having him in the ad? Regardless of whether or not you agree with the connotations here, the fact is Avon sells; the advert works. And for the modern-day feminists who scream sexism, I'd like to divert your attention to Fig. 4. You see, the roles in the advert have essentially been reversed: it is now the woman who is lustfully staring at and touching the man. Personally, I think this is a great advert in terms of the imagery and colour used.
At least with Fig. 4 they have made some attempt to make the cologne eye-catching with its golden colour. Again, Million is a very popular cologne for men, proving that the advert works.
Ultimately, one could argue that Ferdinand de Saussure started what would become cultural behaviours and norms, and that Barthes's poststructuralist idea shapes humanity's acceptance of class (successful people and non-successful people). Most people give no thought to semiotics; many deal with it subconsciously. But I would say that if more people took some time to think about what they are hearing and what they are seeing, then just maybe society could change its views for the better. Maybe…
I would like to begin by breaking down the term 'new media' into two parts: new media and old media. For me, new media refers to media that has come about since the invention of the computer. Old media, on the other hand, refers to traditional media sources such as print and radio.
The mass dissemination of information nowadays is primarily done through the internet. The modern-day internet really began to take shape in the early nineties, but humanity would have to wait at least another fifteen years before social media websites began to take off.

Social Media

In my view, social media websites, and for the sake of this blog I'll be referring to Facebook, Twitter and Snapchat (there are others), are both positive and negative. I suppose the first element to talk about is privacy. I carried out some of my own research, which involved reading through the privacy policies of the above companies, and I discovered some alarming things that the everyday person would not realise. When you sign up to a social media site, you generally expect your private information to be secured. Many are of the opinion that their information is safe, which is why new users generally skip over the sentence highlighted in Figure 1.
I don't know what it is, but humans tend to skip the terms and conditions of the majority of things they use. The data use policies for Facebook and Twitter both tell us that they have our permission to share our public information with any worldwide IP. That means your photographs, videos and digital memories can be shared with anyone, anywhere, at any time, without your knowledge. Is this not an alarming fact? I would have thought so, but given that there are 2.13 billion monthly Facebook users (a 14% increase on last year), perhaps I am wrong? You can discover more Facebook facts here: https://zephoria.com/top-15-valuable-facebook-statistics/
Snapchat is another application that has been heavily criticised for breaching privacy laws. When you think about it, an app that allows users to share photographs and videos, sometimes with geolocation enabled, is bound to be open to manipulation. There have been a number of blackmail-related incidents which involved Snapchat, and still there are 301 million monthly users. Even though this is an app so exposed to exploitation, people still use it. Does this not prove the point that the everyday person really does not care about their privacy? Does it not prove that people couldn't care less whether or not their information landed in the hands of a cyber criminal?
The next area I'd like to talk about is the dissemination of information on social media websites. For this section I will mainly be referring to Twitter. Twitter has not really evolved that much since its inception in March 2006: yes, we can now post more memes than ever, a gif here or there, but I think the overall design and purpose of the site have not changed significantly. Twitter is more orientated towards sharing information and news. Because the public can follow real "celebrities" and others, there is a sense of personal connection with the individual; I believe that's the primary difference between Facebook and Twitter. On Twitter, well-known individuals such as the President of the United States of America and the Prime Minister of the United Kingdom, and organisations like Fox News and the BBC, all use the site to share their thoughts and news. Since 2016, many Twitter users have been using the term 'fake news' to describe stories they deem to be false. One issue with the term is that some people who simply disagree with a story, even though it's true, will still brand it as fake news. For me, someone should have the right to call out a news agency, politician or celebrity if they share a story which is knowingly false, or one which is discovered to be exaggerated in a fashion that harms another individual or organisation. I believe Twitter is useful in that regard: it gives people a somewhat direct link to these corporations and individuals, and allows them to criticise and debate.
One issue, however, is that Twitter is very much open to trolls: people who have no life and spend their miserable hours on Earth attacking other people on Twitter for no real reason. Thankfully, the 'Mute' and 'Block' buttons on Twitter work extremely well. If I step back for a moment and think about social media platforms in the context of the new media theme, these sites are incredibly useful tools for disseminating FACTUAL information (news): because of the large membership figures of each site, they are the perfect platform to get a story out there. There are countless examples of stories that have gone viral, and such stories are a real example of the power of social media.

Fact or fiction?
Let me begin this segment with a quote from Douglas Adams:
"Don't believe anything you read on the net. Except this. Well, including this, I suppose." For me, this quote sums up an element of the internet pretty well. Earlier in this piece I discussed the notion of fake news, and this question of "Should you believe everything you read?" is similar, though there are some subtle differences.
The best real-world example I can use for this segment is Wikipedia. A wiki, by definition, is a website that allows collaborative editing of its content and structure by its users. In the case of Wikipedia, anonymous users can edit content and structure, free of charge and with no fact-checking involved. This is a problem. Websites like Wikipedia are on a par with outlets that share information based on the editor's political opinion (BuzzFeed, which is left wing, and Breitbart, which is right wing). I accept that no person or organisation is perfect, and that sometimes they get things wrong; I have no problem when a person or corporation comes out and admits a mistake. I do have a problem with those who blatantly publish a story or share information that is knowingly false, and then refuse to take it down. Linking this back to Wikipedia, I think its biggest downfall is that there is no fact-checking mechanism: it is left to the individual user to do their own fact checking. That in itself has issues. How does one know their 'facts' are correct? For my part, I try to take most things on the internet with a pinch of salt. I'm not some bitter crab who spits venom at every single news agency and politician; I read into things before I decide to share a story or information.

Do you work for the CIA?

You're probably wondering why I have subtitled this segment 'Do you work for the CIA?'. Well, it leads me on to a pretty topical issue: government surveillance. Recently there have been a number of viral videos which show people asking 'Alexa', the virtual assistant created by Amazon, whether or not she works for the CIA. Upon being asked, 'Alexa' shuts down without answering. This could just be a humorous, deliberate coding inclusion by Amazon to get the conspiracies flowing, but even if that is so, it highlights a key point.
Many governments, regardless of their political affiliation, subject their populace to some degree of mass surveillance. The United States Government, for example, says it is simply monitoring devices and people who are 'of interest' to the FBI and the CIA; the U.K. Government likewise insists only those 'of interest' are being monitored by MI5 and MI6. The overarching issue, however, is that a government can access your data, phone calls, text messages and much more at any time without your permission. Fundamentally, I am of this belief: it is morally wrong to access the data of millions of good people, but security services must be able to access devices and monitor terrorist suspects. It is a must. So far this year the security services have foiled a number of terrorist plots in the U.K. alone.

So what does it all come down to? Individual thoughts on new media are unique, in that people feel differently about different forms of new media. One final point which I feel is relevant is society's need to recognise where media came from and where it is going; by that I mean the evolution of media and its impacts on humanity. I have talked about some aspects of new media above, but there are many others which could be discussed, including what media will look like in the next ten years. Will print be totally dead? Only time will tell. I believe modern-day users of the internet, and of new media in general, must have a clear understanding of what data they are handing over and, more importantly, what is being done with it.
I am firmly of the belief that creativity is a process; it cannot be random. Every one of us is creative to an extent; some people just develop their creative process and original ideas more than others. There is no single defined 'creative process': everybody develops their original ideas differently. I have worked on a number of design-based projects which involved creative thought and the application of a process.
Depending on deadlines, I usually set aside a significant amount of time to think creatively. This is the beginning of my creative process. I examine the assignment (in this example, designing a dashboard for an iPad), then begin to write down possible solutions. As Fig. 1 below shows, I frequently make use of sketches during the earlier stages of the creative process.
My creative process also involves consulting other people about my ideas, but most importantly I adapt and alter my original ideas following feedback and further thinking (revisiting them after a period of time). Only after these steps have been fulfilled do I actually build a solution. That is my creative process; I am sure it differs from yours.
Although creative institutions (even in Northern Ireland) are failing, the creative class is thriving. As Florida mentions, there are aspects of the creative class which are not succeeding, such as print media; however, when you look at the growth of digital services, the class is growing overall. There has been a gradual shift in the arts sector in Northern Ireland from dwelling on professionals and super-creatives to looking more at musicians, artists and performers. This is shown by the increasing number of music festivals and art exhibitions across the country. I believe these people make effective use of their original ideas and are able to follow a clear process to produce a worthwhile output, though I do not think society is yet ready to recognise the true value of such creative output. There is another point to note regarding the creative class: many of its members tend to live in cities, particularly in larger countries like the United States of America, while a large agricultural class lives in rural areas (as is to be expected). I feel there is work to be done to harness some of the talent that exists in the rural parts of a country through the creation of opportunities.