ACM - Computers in Entertainment

Artist and Audience: Emerging the nano-entertainment experience

Guest Column
By Anthony Brooks

…ultimately, it is the intellectual vision, transposed into the work step by step with technology as its reference, that remains the core of a virtual work of art. —Oliver Grau [1]

Inhabited Information Spaces: Living with Your Data [2] outlined a vision of a future in which virtual entertainment experiences would not depend on hardware as we know it today. Screens, projectors, or 3-D glasses/head-mounted displays (HMDs) would not be needed to convey visual information. Speakers, headsets, or earplugs would not be needed to transmit auditory stimuli. The same would be true for the other senses, such as haptics, olfaction, and taste. The author’s chapter in Inhabited Information Spaces predicts that in the not-too-distant future, technology will have advanced to the point where an invisible nano cloud engulfing the body addresses each sense organ’s stimulus requirements through embedded intelligence and communication pathways programmed according to an individual’s profile of needs, preferences, and desires. The concept was coined Virtual Interactive Space (VIS), originating from a body of research begun in 1985 exploring ICT-based entertainment as an alternative healthcare intervention [3, 4].

The idea built upon the concept of “Utility Fog” (or Ufog), a hypothetical cloud of micro-scale robots that communicate and work in tandem to achieve certain functions [5, 6, 7, 8, 9]. VIS extended the Ufog concept: in the future, the embedded nano/ion AI would be programmed to detect which human sense organs the layers immediately adjacent to the body were to address and stimulate. Through neighbor associations, distinct and direct media would be transmitted to stimulate the human experience according to inputs from a noninvasive, non-worn brain/mind activity sensing device, which would dictate the experiences via the nano transmissions. In this way, the closure of the human afferent-efferent neural feedback loop, evident in today’s interactive virtual/mixed/augmented/artificial environments, is optimized [e.g., 14].
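
As a thought experiment only, the “neighbor associations” above can be sketched as hop-by-hop message passing through the fog. The following Python sketch is a minimal illustration, not an existing API; the class NanoAgent, the function relay_media, and the organ labels are all hypothetical names standing in for the fog’s micro-robots:

    from collections import deque

    class NanoAgent:
        """Hypothetical fog micro-robot: knows its immediate neighbors and,
        if it sits in a layer adjacent to the body, which sense organ it
        is programmed to stimulate."""
        def __init__(self, aid, organ=None):
            self.aid = aid
            self.organ = organ      # e.g. "retina", "cochlea", or None
            self.neighbors = []     # adjacent agents in the cloud

    def relay_media(source, payload, target_organ):
        """Breadth-first relay through neighbor associations: the payload
        is handed from agent to agent until one addressing the target
        organ is reached; returns the hop path, or None if unreachable."""
        visited = {source.aid}
        queue = deque([(source, [source.aid])])
        while queue:
            agent, path = queue.popleft()
            if agent.organ == target_organ:
                return path         # this agent would render the payload
            for nb in agent.neighbors:
                if nb.aid not in visited:
                    visited.add(nb.aid)
                    queue.append((nb, path + [nb.aid]))
        return None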

Associated with VIS is what Laban referred to as the “Kinesphere,” the 3-D volume surrounding the human body that represents the extent of a person’s range of motion [10].
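
To make the Kinesphere concrete, it can be approximated (a simplifying assumption, not Laban’s formal definition) as a sphere around the body’s center whose radius is the person’s maximum reach; a VIS system could then test whether a given point, say a nano-agent’s position, lies within it:

    import math

    def in_kinesphere(point, center, reach):
        """True if a 3-D point lies within the performer's kinesphere,
        modeled (as a simplifying assumption) as a sphere of radius
        `reach` -- the maximum limb extension -- around the body center."""
        return math.dist(point, center) <= reach

    # e.g. an agent hovering 0.4 m from the body center of a person
    # with a 1.1 m reach sits inside the kinesphere:
    # in_kinesphere((0.4, 0.0, 0.0), (0.0, 0.0, 0.0), 1.1)  ->  True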

VIS is envisaged to one day become inhabited by micro-entities, much as predicted for Utility Fog. These entities can be envisioned as tiny, self-replicating robots with enormous computing power, invisible to the human eye. Each “foglet,” as they were called in Ufog and here coined “VISlets,” is made of a nanocomputer and telescoping arms 6 to 10 millionths of a meter in size. Located directly next to each of the human’s sense organs is a specific array of these VISlets (intelligent nanobot clusters). Each array is programmed to address the full range of potential stimuli required by its organ, and thus to deliver specific content with unparalleled resolution, optimized for the host human. Responses to the delivered stimuli are registered and processed in the embedded AI of the central nanobot engine, located as a microscopic dot on or in the host human. This discrete, manufactured, micro-binary nano-brain collaborates with the host human’s natural brain and body functions, analyzing all aspects of the human.
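
A minimal Python sketch of this architecture, under the assumption that the description above maps to one VISlet array per sense organ plus a central engine that dispatches content and collects responses (OrganArray, NanoEngine, and the sample stimuli below are hypothetical illustrations), might look like this:

    class OrganArray:
        """Hypothetical VISlet cluster dedicated to one sense organ."""
        def __init__(self, organ, stimuli_range):
            self.organ = organ                  # e.g. "retina", "cochlea"
            self.stimuli_range = stimuli_range  # stimuli the array can render

        def deliver(self, stimulus):
            if stimulus not in self.stimuli_range:
                raise ValueError(f"{self.organ} array cannot render {stimulus}")
            # ...nano-level rendering would happen here...
            return {"organ": self.organ, "stimulus": stimulus, "response": 0.0}

    class NanoEngine:
        """Central 'nano-brain dot': routes content to organ arrays and
        feeds measured responses back into the next frame of the experience."""
        def __init__(self):
            self.arrays = {}

        def register(self, array):
            self.arrays[array.organ] = array

        def run_frame(self, content):
            # content maps each organ to the stimulus it should receive
            return {organ: self.arrays[organ].deliver(stim)
                    for organ, stim in content.items()}

    engine = NanoEngine()
    engine.register(OrganArray("retina", {"polar-bear scene"}))
    engine.register(OrganArray("cochlea", {"growl"}))
    responses = engine.run_frame({"retina": "polar-bear scene",
                                  "cochlea": "growl"})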

Each individual’s computer-generated VIS environment ensures the desired cognitively driven mixed-reality experience; human responses inform the environment so that refinements can be implemented.

Activation is at the discretion of the end-user; so yes, there is an off button!

The concept of multitudes of bidirectionally communicating nanobots constituting a human’s personal VIS and being used for “entertainment” rests on such devices being benignly invasive to the human. Breathing, for example, is optimized by the swarm of external nanobots as they monitor levels of inhalation and exhalation. Thus, nanobots both inside and outside your body communicate to realize never-before-witnessed experiences that give “artists” opportunities to take entertainment to an advanced level.

Taking the concept of extreme experience one step further: rather than passively observing an incident on TV from the comfort of a sofa, one can get into the action. Imagine a situation where the nanobots secure the host human in a protective, invisible VIS cocoon [6], which the nano programming simulates as a helpless wildlife feast so that the human (as audience) can experience a ferocious attack, e.g., by a polar bear, shark, or lion. This offers experiential entertainment in which the human inside is stimulated to experience the hungry beast trying to break down the nano-barrier, which prevents any harm from coming to the human experiencing the “happening.” The aesthetic value of such an experience can be argued in line with Bertrand Russell’s acquaintance principle as discussed in Wollheim [11, 12]. Thus, rather than being a passive viewer or one relying on description, the human can safely experience the “almost” real-world aesthetic through direct acquaintance like never before. In many ways, such a VIS concept opens the world of entertainment to almost endless horizons for future generations of artists and audiences.

In such extreme examples of created adrenaline rushes, bodily functions (neural, physical, cognitive, etc.) are monitored and acted upon to ensure in-experience well-being and an optimal experience. For example, the unconscious change in pupil dilation in response to a visual stimulus is detected and automatically communicated to the nanobot AI engine, which then adapts the stimulus according to neural triggers that indicate the desired effect. This causal feedback process results in self-generated subliminal stimulus. Multimodal stimuli are coordinated through complementary algorithms with embedded sub-system intelligence. Virtual reality, computer gaming, contemporary art, and other entertainment as we know it today will be redefined through nanoVIS.
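
The pupil-dilation loop just described can be illustrated as a simple proportional controller in Python; the target value, gain, clamping range, and sensor function named below are illustrative assumptions, not physiological ground truth:

    def adapt_intensity(intensity, pupil_mm, target_mm=5.0, gain=0.1,
                        lo=0.0, hi=1.0):
        """One step of the causal feedback loop: pupil dilation (a proxy
        for arousal) is compared with a target; the stimulus intensity is
        nudged proportionally and clamped to a safe range. All constants
        here are illustrative assumptions."""
        error = target_mm - pupil_mm    # under-aroused -> positive error
        return max(lo, min(hi, intensity + gain * error))

    # Closed loop, with read_pupil_mm() standing in for the hypothetical
    # nanobot sensor array and render_stimulus() for the VISlet output:
    # intensity = 0.5
    # while experience_running():
    #     intensity = adapt_intensity(intensity, read_pupil_mm())
    #     render_stimulus(intensity)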

New findings in neural science (e.g., advanced scanning techniques) will most likely uncover stimulus-response indicators similar to the pupil-dilation example described above. These will be correlated by the nano-engine dot embedded in the human. In this way, new platforms for advancing the field through trans- (cross-/inter-/multi-) disciplinary collaborations between the human (natural) and technical (created) disciplines will increasingly become evident. The question beckons: will they measure up to the task, and will the powers that be permit it, considering the potential of such advances in the “wrong” hands? Soon, art (if the term is still to be used) will change forever through such instruments as nano: the potentials for artist and audience are mind-boggling. The future “scientist as artiste as scientist” will emerge to entertain, in line with predictions in an article titled “Body Electric and Reality Feedback Loops: Virtual Interactive Space and Entertainment” [13]. Will it be that in the future terms such as virtual reality, artificial reality, augmented reality, mixed reality, etc. become outmoded, collectively reformulated as “nano-reality,” where the physical and the virtual reside alongside and within our minds, bodies, and enveloping environment? Perhaps such terms (which often confuse) will not be used at all, and we will simply rely on the “nano-entertainment experience” as a sufficient enclave to visit whenever desire and fancy take us.

References

[1] Grau, O. Virtual Art: From Illusion to Immersion. MIT Press, Cambridge, 2003, 257.

[2] Brooks, A.L. SoundScapes. In Inhabited Information Spaces: Living with Your Data, eds. Snowdon, D.N., Churchill, E.F., and Frécon, E. Springer, New York, 2003, 89–100.

[3] Brooks, A.L. Virtual Interactive Space (V.I.S.) as a Movement Capture Interface Tool Giving Multimedia Feedback for Treatment and Analysis. In The 13th International Congress of the World Confederation for Physical Therapy, Science Links Japan, Yokohama, Japan, 1999; http://sciencelinks.jp/j-east/article/200110/000020011001A0418015.php

[4] Brooks, A.L. Mr. Beam. Journal of the European Network for Intelligent Information Interfaces (i3net), 10, 1 (2000), 2-6.

[5] Hall, J.S. Utility Fog: A Universal Physical Substance. Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace. NASA Conference Publication CP-10129, Westlake, OH, 1993, 115-126.

[6] Hall, J.S. Utility Fog, Part I. Extropy 6, 3 (1994), 16-20.

[7] Hall, J.S. Utility Fog, Part II. Extropy 7, 1 (1995), 7-12.

[8] Hall, J.S. Utility Fog: The stuff that dreams are made of. In Nanotechnology: Molecular Speculations on Global Abundance, ed. Crandall, B.C. MIT Press, Cambridge, 1996, 161-184; http://www.kurzweilai.net/meme/frame.html?main=/articles/art0220.html?m%3D7

[9] Avaneendran, A. Utility Fog. Seminar 2004 Report, No. 01609. Government Engineering College, Thrissur, India, 2004.

[10] Laban, R. Modern Educational Dance. Macdonald & Evans, London, 1963.

[11] Russell, B. Knowledge by Acquaintance and Knowledge by Description. Proceedings of the Aristotelian Society 11 (1910/1963), 108–128; http://www.hist-analytic.org/Russellacquaintance.pdf

[12] Wollheim, R. Art and its Objects. Cambridge University Press, New York, 1980.

[13] Brooks, A.L. Body Electric and Reality Feedback Loops: Virtual Interactive Space & Entertainment. ICAT2004, Seoul, Korea, 2004; http://www.idemployee.id.tue.nl/g.w.m.rauterberg/conferences/ICAT2004/SS2-6.pdf

[14] Brooks, A., Hasselblad, S., Camurri, A., and Canagarajah, N. Interaction with Shapes and Sounds as a Therapy for Special Needs and Rehabilitation. In The Fourth International Conference on Disability, Virtual Reality and Associated Technologies, 2002, 205-212.

Copyright © 2019. All Rights Reserved