From PULSE 2016: WTOC film stock and news story:
VR in the Gallery
This new line of creative inquiry builds in part on the past success of a project titled VR Keith, which was presented at the SIGGRAPH (Association for Computing Machinery's Special Interest Group on Computer Graphics and Interactive Techniques) International Conference in 2001. VR Keith was a series of artworks exploring a virtual character, which appeared as an animated, interactive video-robot that moved about a gallery or exhibition space. As the "N-Space" exhibition brochure described it, "VR Keith 2.0 explores the interface between interaction, performance, and avatars. Using comedy, irony and cheesy quotes from cultural theorists, VR Keith interacts with the gallery audience in a direct fashion. VR Keith argues how much better virtual and robotic humans are than real ones."
VR Keith's success was built upon dynamic interaction with audiences. VR Keith would give "tours" of artworks, interact directly with audience members, perform dance sequences, and more. VR Keith was subsequently featured in several exhibitions, media outlets, and academic publications. At the time VR Keith was produced, the available technology limited the robotic character to being "driven" by an operator, although most audiences simply assumed that VR Keith was an autonomous, A.I.-driven system. The primary limitation of this artwork was that it was not autonomous and required a "handler" whenever it was exhibited. The artwork received a great deal of positive attention from the approximately 30,000 viewers at its most successful international exhibition, SIGGRAPH 2001. Since then, I have retained my interest in artificial intelligence research. As my own research has evolved through other lines of inquiry, I have incorporated related topics within other interactive artworks, science education projects, and my Animation courses.
The A.I. Portraits series of interactive artworks:
The artworks use a variety of tools and systems in their creation and presentation. Open-source A.I. systems within the Unity game-development framework will serve as the primary tools for the works. Design and modeling, as well as the personality profiles and histories of the characters, will be generated through various game design and production strategies, using tools such as 3D scanners and Blender and Maya modeling and texturing. Unity scripting will handle personality aspects, audience input, and animation/simulation control. Motion-capture techniques will provide life-like gestures and movements. Microphones, cameras, and Kinect 3D scanners will give the audience a wide range of input possibilities to evoke reactions and discussion with the A.I. Portraits.
The three artworks each explore a distinct A.I. Portrait concept:
The first interactive artwork involves the use of VR headsets (Oculus Rift) and explores a more immersive interaction within an intelligence-activated landscape, where the audience can explore a dynamic environment and experience the landscape as a reactive entity. The abstracted landscape of a forest at night is adapted directly from maps of local forest within the Apalachicola National Forest; 3D sound recorded in the forest and custom models are based on the night-time forest environment.
The second interactive artwork involves a more traditional “portrait painting” scene where the audience can have a direct discussion with the A.I. portrait entity using a keyboard and dynamic text.
The portrait asks directed questions of audience members and gives appropriate or surprising responses based on a dynamic artificial intelligence system and creatively scripted interactions. The entity's answers are drawn from a custom 20,000-word A.I.M.L. (Artificial Intelligence Markup Language) database developed by the P.I.
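To illustrate the general mechanism behind an A.I.M.L.-style database, the following minimal Python sketch matches audience input against wildcard patterns and fills a response template. The patterns and responses here are hypothetical stand-ins, not entries from the P.I.'s actual database, and a real deployment would use a full A.I.M.L. interpreter rather than this toy matcher.

```python
import re

# Hypothetical categories: each pairs an AIML-style wildcard pattern
# with a response template. '*' matches any remaining words.
CATEGORIES = [
    ("WHAT IS YOUR NAME", "I am a portrait, not a person."),
    ("DO YOU LIKE *", "I have complicated feelings about {0}."),
    ("*", "Tell me more about that."),  # catch-all default category
]

def respond(user_input: str) -> str:
    """Normalize the input and return the first matching template."""
    text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, template in CATEGORIES:
        # Convert the '*' wildcard into a regex capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*)") + "$"
        match = re.match(regex, text)
        if match:
            return template.format(*(g.strip().lower() for g in match.groups()))
    return ""
```

The ordering of categories matters: specific patterns are tried before the catch-all, which is how A.I.M.L. keeps a conversation moving even when no scripted exchange applies.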
The third artwork comprises a group of “animated entities” that respond to the physical presence and movement of one or two audience members at a time. The A.I. entities within the scene react to behavioral cues: using a Kinect 3D scanner, the artwork can detect body gestures and movement patterns such as “aggressiveness” so that the entities can respond in a reactive, intelligent way.
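One plausible way to derive an "aggressiveness" cue from Kinect skeleton data is to threshold the frame-to-frame speed of tracked joints. The sketch below is a hypothetical Python illustration: the frame rate reflects the Kinect's roughly 30 Hz skeleton stream, but the speed cutoff is an illustrative stand-in, not a value from the actual artwork, which would read joint positions from the Kinect SDK.

```python
import math

FRAME_RATE = 30          # Kinect delivers skeleton frames at ~30 Hz
AGGRESSIVE_SPEED = 1.5   # metres per second; hypothetical cutoff

def joint_speed(prev, curr, dt=1.0 / FRAME_RATE):
    """Euclidean speed of one (x, y, z) joint between consecutive frames."""
    return math.dist(prev, curr) / dt

def classify_gesture(prev_joints, curr_joints):
    """Label the movement by the fastest tracked joint this frame."""
    peak = max(joint_speed(p, c) for p, c in zip(prev_joints, curr_joints))
    return "aggressive" if peak > AGGRESSIVE_SPEED else "calm"
```

A fast hand movement (e.g. 0.1 m between two frames, about 3 m/s) would register as "aggressive", while near-stillness registers as "calm", giving the entities a simple behavioral signal to react to.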
Earlier works from the archive:
A mobile robotic video character who leads audience members on gallery tours, dissing the art, and undermining any seriousness he comes across. Plus, he loves glam-dance.
Participants use a computer interface to interact with Litt'l havoc by physically moving the shopping cart that contains it. Pushing the shopping cart propels the semi-nude artist as he pushes his own cart through absurd environments of old postcards, NASA space imagery, and historic street scenes of Florida.
Interacting with Litt'l havoc, participants symbolically become the artist, transgressing cheesy landscapes of abandoned and fragmented memory. Constructed of found, borrowed, abandoned, and confiscated components, Litt'l havoc is a pathetic new breed of hack/junk/found interactive art.