Research

New Publication: Stage Magic as a Performative Design Principle for VR Storytelling (Summer 2021)

My peer-reviewed journal article for a special VR storytelling issue of the international journal Cinergie was published under a Creative Commons license by the University of Bologna:

Abstract: This article examines The VOID’s Star Wars: Secrets of the Empire (2017) VR arcade attraction, and analyzes the intermedial magic principles employed by co-founder and magician Curtis Hickman to create the illusion of a fictive world with impossible space and liveness. I argue that The VOID (Vision of Infinite Dimensions) functioned like the nineteenth-century magic theaters run by Georges Méliès and others, employing magic principles of misdirection that directed player attention towards the aesthetics of an illusion and away from the mechanics of the effect-generating technology. Narrative framing and performative role play transported multiple players into a believable Star Wars immersive experience, creating an aesthetics of the impossible that reflected the goal of many stage magic tricks and was foundational to trick films in the cinema of attractions of the early twentieth century. Using game studies concepts like Huizinga’s magic circle and theatre arts concepts like Craig’s über-marionette, this article suggests that The VOID and other stage magic approaches to VR, like Derren Brown’s Ghost Train (2017), are a new medium for participatory theatre that incorporates immersive features from both cinema and games.

Keywords: VR Magic Circle; Impossible Aesthetics; Immersion; Space in VR; Liveness.

Citation: Maraffi, C. (2021). Stage Magic as a Performative Design Principle for VR Storytelling. Cinergie – Il Cinema E Le Altre Arti, 10(19), 93–104. https://doi.org/10.6092/issn.2280-9481/12234

Click here to download the full paper PDF from Cinergie…

AI Transmedia Character Design for Games and VR (2020)

I am exploring transmedia character designs and concept art in preproduction using AI tools like Artbreeder. Designs are generated by collaborative interactive evolution algorithms and generative adversarial networks (BigGAN, StyleGAN, etc.) in “breeding” passes where I select children, adjust “genes” or features, and cross-breed the results. I am interested in using AI to explore the design space for iconic characters like Frankenstein or Sherlock Holmes, using seed imagery from popular media, to create new and diverse representations for game narratives.
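Under the hood, tools like Artbreeder treat each image as a latent vector in the GAN: cross-breeding blends two parents’ vectors, and “gene” sliders nudge a vector along learned attribute directions. Here is a minimal Python sketch of that curation loop, with NumPy vectors standing in for real GAN latents; the function names and parameters are illustrative assumptions, not Artbreeder’s actual API.

```python
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 512  # a typical StyleGAN latent size


def crossbreed(parent_a, parent_b, mix=0.5):
    """Blend two latent vectors; mix=0.5 is an even cross."""
    return (1.0 - mix) * parent_a + mix * parent_b


def adjust_gene(latent, direction, amount):
    """Nudge a latent along a learned attribute direction (a 'gene')."""
    return latent + amount * direction


def breeding_pass(parents, children_per_pair=4, noise=0.1):
    """Generate a litter from every parent pair; a human curator
    then selects which children become the next pass's parents."""
    children = []
    for i in range(len(parents)):
        for j in range(i + 1, len(parents)):
            for _ in range(children_per_pair):
                child = crossbreed(parents[i], parents[j],
                                   mix=rng.uniform(0.25, 0.75))
                child += noise * rng.standard_normal(LATENT_DIM)  # mutation
                children.append(child)
    return children


# Seed parents would come from projecting media images of Sherlock into
# the GAN's latent space; random vectors stand in for them here.
parents = [rng.standard_normal(LATENT_DIM) for _ in range(3)]
litter = breeding_pass(parents)  # 3 pairs x 4 children = 12 to curate
```

Here is an AI-generated design for Sherlock Holmes that has features from eleven different media representations: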

Sherlock AI character designs created by cross-breeding features from eleven media representations, then adjusting genes for more diverse game characters (Topher Maraffi, 2020).
Sherlock AI cross-breeding and selection tree from Artbreeder (Topher Maraffi, 2020).
“Sherlock AI” concept art; all graphic elements created through AI “breeding” and artificial selection at artbreeder.com (Topher Maraffi, 2020).
Frankenstein creature designs generated by AI breeding from source imagery of Boris Karloff and Rory Kinnear, then combined into male and female hybrids (Topher Maraffi, 2020).
Frankenstein AI cross-breeding tree with Boris Karloff and Rory Kinnear seed representations (Topher Maraffi, 2020).
“Frankenstein AI” concept art; all graphic elements created through AI “breeding” and artificial selection at artbreeder.com (Topher Maraffi, 2020).

We are also exploring GAN breeding for historical figure visualization, so that we can create full-color, realistic 3D designs from black-and-white photographs and painted portraits. We are using this technique to visualize Harriet Tubman and Frederick Douglass as the first step in an AI-based 3D character creation pipeline for our NEH-funded Mitchelville AR Tour project:

AI breeding tree showing how target features were created by analyzing historical photos, then refined by manipulating genes, cross-breeding lines, and selecting children that fit the desired Harriet aesthetic (Topher Maraffi, 2020).
By manipulating age genes and cross-breeding from different photos, we were able to create young, middle-aged, and old versions of Harriet (Topher Maraffi, 2020).
Frederick Douglass breeding tree: we wanted to develop full-color characters that would resemble the originals and be suitable for other AI tools that generate rigged 3D character models (Topher Maraffi, 2020).
Because Frederick Douglass and Harriet Tubman are not in the BigGAN database, the generated images are more general than the original photos, but they can be rendered from multiple angles and still resemble the source (Topher Maraffi, 2020).

NEH-Funded XR Research Project: Mitchelville AR Tour (2019-20)

We received a National Endowment for the Humanities (NEH) Digital Projects for the Public “Discovery” grant and a Walter & Lalita Janke Emerging Technologies Fund seed grant to begin design work on an augmented reality tour of Mitchelville, a historic site on Hilton Head Island, SC, that was the first Freedmen’s town in the US during the Civil War and is a Gullah-Geechee heritage site today. We will be creating a 360 experience telling the story of America’s first efforts at civil rights for African Americans during the Reconstruction period, as it relates to the Port Royal Experiment, with life-sized historical figures like Harriet Tubman experienced on site through Magic Leap headsets and mobile phones.

Mitchelville AR Tour Web Design by Topher Maraffi (2020), 3D models by Ledis Molina and James Jean-Pierre, historical images courtesy of the Library of Congress.

We have also applied to an NEH Digital Projects for the Public grant to fund development, and are collaborating with The Mitchelville Preservation Project, Reconstruction Beaufort, Penn Center, Fort Lauderdale Museum of Discovery & Science, Daruma Tech, and researchers from University of South Carolina Beaufort, North Carolina State University, and Coastal Carolina University. This project is supported by educational partnerships with Magic Leap and the FAU Center for Body, Mind, and Culture. Read the “Discovery” proposal (NEH Mitchelville Grant Submission 2019, Maraffi) and the “Prototyping” design document (Mitchelville AR Tour Design Doc, 2020).

Click here to go to the Mitchelville AR Tour Project Page…

Funded XR Research Project: Autonomous Car Study (2019)


We received a Dorothy F. Schmidt College of Arts and Letters seed grant to begin work on an interdisciplinary project to develop a driving game simulation, using the Magic Leap One headset, that will inform autonomous vehicle design (in collaboration with Computer Engineering faculty Dani Raviv, Hari Kalva, and Aleks Stevanovic). Our game simulation will track what highly ranked drivers do with their eyes and bodies to successfully maneuver a vehicle in an urban environment. MTEn MFA students Alberto Alvarez and Brandon Martinez will be developing this project throughout the 2019-2020 academic year.
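To make the data-collection goal concrete, here is a minimal Python sketch of the kind of per-frame driver-attention log we have in mind; the schema (field names, units, sources) is a hypothetical illustration, not the project’s actual format.

```python
import csv
import time
from dataclasses import dataclass, asdict, fields


@dataclass
class DriverSample:
    """One frame of driver-attention data (hypothetical schema)."""
    timestamp: float       # seconds since session start
    gaze_x: float          # normalized gaze direction from eye tracking
    gaze_y: float
    gaze_z: float
    head_yaw: float        # head pose in degrees
    head_pitch: float
    steering_angle: float  # simulator state at the same frame
    speed_mps: float


def log_session(samples, path="driving_session.csv"):
    """Write one session of samples to a CSV for later analysis."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(DriverSample)])
        writer.writeheader()
        for sample in samples:
            writer.writerow(asdict(sample))


# Example with one fake sample; real data would stream from the headset.
log_session([DriverSample(time.time(), 0.1, -0.05, 0.99, 2.0, -1.5, 0.12, 13.4)])
```

Here is a video of a smaller VR car project they created for my spring 2019 Interactive Interface Design course: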

XR Research Project: Magic Murder Mystery Theatre (2019)


We are designing a spatial computing app for the Magic Leap One headset that is a take on the classic live improv murder-mystery theatre game. A virtual body will be placed in the environment, with clues that can only be seen while wearing the headset. Animated avatar faces and costumes will be superimposed on the players to hide the headsets and create a theatrical aesthetic, and suggested lines will be generated in the HUD to drive the narrative. This project is part of our lab’s Performatology research, which applies performing arts and animation theory to interactive media.

XR Research Project: Climate Reality in South Florida (2019)

I have applied to the Knight Foundation Art Challenge 2019 for the Miami area to create a virtual gallery experience that uses Magic Leap headsets to help visitors better understand the impacts of Climate Change in South Florida:

Description: “The Miami art gallery appears empty, with spotlights on blank walls and a sign reading ‘Climate Reality in SoFlo’. Three pedestals hold Magic Leap headsets. When you wear a headset, the walls reveal spherical images: virtual ‘tiny world’ photographs of South Florida sites. You hear water sounds as you approach an image of downtown Miami; suddenly it surrounds you as a 360-degree video. Water rises in the 3D environment until it reaches your waist, and as the vision fades, text informs you of the projected sea level rise at this location. Along with other people wearing headsets, you notice virtual figures in the gallery. One of the shadowy figures reaches out towards you, and a climate-related question appears. This art experience connects people to place and people to people through the issue of Climate Change, using augmented reality headsets that communicate between a gallery in Miami and one on the FAU campus.”

Joiner and 360 Digital Imaging Research (2018-2019)

Exploring David Hockney’s “joiner” photo collage technique led me to work with 360 imaging technology. Hockney’s technique applied Cubist concepts of manipulating time and space by compositing hundreds of photos of a staged scene into a single 2D image. I created a joiner composite of my USCB Digital Imaging students playing hopscotch outside our college building in Spring 2018, and when finished it resembled a 360 panorama shot.

“Hopscotch” joiner demo for my Spring 2018 Digital Imaging course, which resembles a panoramic photograph.


The process of creating a joiner is like drawing with photographs, and adds a dimension of time to a collage through the seamless blending of many photographic moments. With regard to space, when you composite enough images captured in all directions, the logical result is a 360-degree panoramic image, which can then be wrapped on its polar coordinates to create a continuous spherical perspective.
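Concretely, the polar wrap maps each output pixel’s angle around the center to the panorama’s longitude and its radius to latitude. Here is a minimal Python sketch of this “tiny world” transform, assuming an equirectangular source image (NumPy and Pillow; the file names are placeholders):

```python
import numpy as np
from PIL import Image


def tiny_world(pano_path, out_size=1024):
    """Wrap an equirectangular panorama on its polar coordinates
    to produce a 'tiny world' (little planet) image."""
    pano = np.asarray(Image.open(pano_path).convert("RGB"))
    h, w = pano.shape[:2]

    # Output pixel grid, normalized to [-1, 1] around the center
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx = (xs - out_size / 2) / (out_size / 2)
    dy = (ys - out_size / 2) / (out_size / 2)

    r = np.sqrt(dx**2 + dy**2)   # radius -> latitude (panorama row)
    theta = np.arctan2(dy, dx)   # angle  -> longitude (panorama column)

    u = ((theta + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = (np.clip(r, 0.0, 1.0) * (h - 1)).astype(int)
    # Note: use (1 - r) instead of r to put the ground at the center.

    return Image.fromarray(pano[v, u])


# tiny_world("hopscotch_pano.jpg").save("hopscotch_tiny_world.png")
```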

“Hopscotch World” 360 joiner image created in Photoshop and printed on canvas, Spring 2018.


The new aesthetic of 360 photos and videos is only possible with software that stitches many images together, a process very similar to Hockney’s joiner collage technique. The resulting composite can be flattened onto a 2D plane to create a “tiny world” image, or wrapped on the inside of a 3D sphere to create a virtual environment. I am now using 360 video cameras to create spherical joiners that capture a sequence from different positions in space, such as this one of my cat walking around our patio.

“Miracle Joe Searching for Che Che” 360 joiner from HD 360 video sequence, Spring 2019.


The 360 stitching process can produce some interesting artifacts when blending moving figures, which I then enhance in the digital compositing process.

“The Southernmost Joiner” 360 joiner of Key West, Fall 2019.


I process these 360 digital composites to look like natural media, and then print some of them on canvas or paper to enhance with ink and acrylic paint.

Savannah park 360 joiner painting (Topher Maraffi, 2018).

Broadcast Design Research (2016-2017)

My filmmaking and site-specific AR research at USCB was related to my Broadcast Design courses, and involved collaborating with the local Gullah community and USCB faculty from several departments to produce the documentary film Jumpin’: SC Roots of Swing Dance. In 2016, I received a Sea Island Center grant to do script research and pre-production designs for the documentary, and to produce a teaser video establishing the look of the film.

“Jumpin’: SC Roots of Swing Dance Teaser” (2017) documentary design.

Over the summer, I shot green-screen footage with one of my Broadcast Design independent study students, and created motion graphics and 2.5D visual effects shots for the video. I am currently writing an NEH Media Development grant proposal for the same project that includes development plans for a site-specific augmented reality app to visualize historic locations and interactively deliver educational content. My background of over fifteen years as a social dance instructor also informs content development on this project.

Jumpin’ Research Poster, Maraffi, 2017.

In 2016, I directed my Broadcast Design class in creating 2.5D effects on photographs of local landmarks to visualize sea level rise for an SCETV production titled Climate Change: A Local Focus. Here is an award-winning student research poster describing the project:

Broadcast Design Research Poster, 2016.

Here are some of the finished 2.5D visual effects shots of flooding in Beaufort. The background is a flyover captured inside Google Earth, with six meters of flooding added:

___________

Digital Imaging Research (2016-2017)

Research related to my Digital Imaging course focuses on exploring David Hockney’s “joiner” collage technique in digital media, extending Cubist concepts into 2D compositing, 3D virtual space, 4D time, and 5D interactivity. My work also explores the relationship professional artists have always had with technology, referencing the Hockney-Falco thesis that photographic aesthetics began in the Renaissance, when painters started experimenting with lenses and mirrors.

My approach traces an aesthetic line from Renaissance painting through the development of photography and moving pictures to contemporary video games. It also includes practice-based arts research and pedagogy for teaching technology more effectively in the classroom.

  • Download my Electronic Visualization & the Arts 2016 paper on this practice-based media arts research… EVA 2016 Paper Maraffi

2016

In my 2016 Digital Imaging course, we crowdsourced photographic imagery of a simple scene, developed 2D Hockney-style joiners in Photoshop, and then designed a 3D joiner representation of the scene in Blender.

A student research poster that explains the 2016 class project:

Digital Imaging Poster, Maraffi, 2016.

A video demo of the process and some experimental 3D joiner animations:

I had the students design a “seek-and-find” 3D video game that used a photo-collage metaphor for the shooter mechanic, and that captured player-generated joiners for printing.
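Purely to illustrate the capture-and-composite idea, here is a hypothetical Python sketch (using Pillow) that pastes captured frames onto a canvas with slight offsets and rotations, echoing the look of a hand-made joiner; the game itself implemented this inside the engine.

```python
import random
from PIL import Image


def composite_joiner(captures, canvas_size=(1920, 1080), jitter=12, max_rot=6):
    """Paste captured frames onto a canvas with small random offsets
    and rotations, so the result reads as a photo-collage joiner.
    captures: list of (PIL.Image, x, y) tuples grabbed during play."""
    canvas = Image.new("RGB", canvas_size, "white")
    for img, x, y in captures:
        tile = img.rotate(random.uniform(-max_rot, max_rot),
                          expand=True, fillcolor="white")
        canvas.paste(tile, (x + random.randint(-jitter, jitter),
                            y + random.randint(-jitter, jitter)))
    return canvas
```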

Here is my class demo for an experimental joiner video game created in Blender and Unity software:

Some image captures from a play-through of the joiner game:

2017

In 2017, we again started by crowdsourcing photos of a simple scene, and then took the practice-based joiner research towards a painting aesthetic using filters in Photoshop. We also changed the process to alternate between 3D and physical media: unwrapping 3D textures and printing them on paper, drawing and painting on top of the prints, and then re-scanning them into 3D textures. The scenes were then rendered and printed to create physical mixed-media collages. Here is a student research poster for the 2017 class project:

Digital Imaging 2017 Research Poster.

A couple of 2D and 3D studies that served as class demos and as stages in the development of my final print on canvas, featured in our 2017 Faculty Exhibition:

My final print on canvas, layered with physical media and shown in our annual faculty exhibition:

“Lab Monitor” (36”x28”), digital “joiner” collage printed on canvas and enhanced with ink, colored pencil, and acrylic paint, Summer 2017.


__________

Media Design (2015-2016)

This creative work started as a digital compositing demo for my Media Design course, and evolved into a finished mixed-media piece featured in our 2017 Faculty Exhibition. The Persistence of Vision (POV) theme refers to the nineteenth-century concept that gave birth to animation and cinema: the visual phenomenon whereby a sequence of images played at a sufficiently fast rate creates the illusion of continuous motion. POV also references a continuity of aesthetics across old and new media, so that painters like Edward Hopper can be related to cinema artists like Alfred Hitchcock.

“Persistence of Vision: Coasting Ahead, Hitch Throws Hopper the POV Sign” (42”x26”), Media Design digital compositing demo with samples of Alfred Hitchcock and Edward Hopper imagery, printed on canvas and layered with acrylic paint glazes, Fall 2017.
Persistence of Vision, mixed-media, 2017.

My piece references how Hopper was inspired by early cinema, often going to the movies for weeks at a time, and how Hitchcock staged scenes that echoed the look of a Hopper painting. Elements from both artists were processed and composited in Photoshop, and then printed on canvas so that layers of acrylic glazes could be applied over the surface.

My Media Design courses often reference the relationship artists have historically had to technology. This piece also started as a demo in Media Design, with the theme of STEAM (Art+STEM), and relates Disney and Dali to Muybridge and the birth of cinema.

“STEAM: Disney & Dali Have Tea at the Beaufort World’s Fair” (17”x22”), Media Design digital compositing demo with samples of Salvador Dali, Walt Disney, Marcel Duchamp, and Harold Lloyd imagery, printed on matte paper and clear acrylic sheets, enhanced with physical media and affixed to layers of glass, Fall 2016.
STEAM, mixed-media, 2016.

Featured in the 2016 Faculty Exhibition, the layers of the digital composite were printed individually on matte paper and acetate, drawn on with mixed media, and then affixed to several layers of glass to create a 2.5D parallax effect. The presentation reflects a technology Disney invented, the multiplane camera, which revolutionized animation in the 1930s.

__________

Video Game Design (2016)

Research related to my Video Game Design courses at USCB focuses on applying a modified MDA (Mechanics, Dynamics, Aesthetics) design framework to a USCB golf game built in the Unity game engine.

Experience-based MDA Game Design, Maraffi 2014.

Since golf is so important to the region, with Hilton Head hosting the Heritage PGA Classic, in 2016 I had my video game students design an island aesthetic for a 3D adventure golf game. Media Arts and Computational Science majors collaborated on modifying a shooter mechanic to create the metaphor of a golf shot for our demo hole, complete with power-ups and standard golf hazards.
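The students built the mechanic in Unity; as a rough sketch of the underlying idea, repurposing a shooter’s “fire” action as a ballistic golf shot, here is a short Python example (the parameter values are arbitrary):

```python
import math

G = 9.81  # gravity, m/s^2


def golf_shot(power, loft_deg, dt=0.02):
    """Treat the shooter's 'fire' input as a golf swing: power sets
    launch speed, loft sets launch angle. Returns the (x, y) flight
    path of the ball under simple projectile physics."""
    vx = power * math.cos(math.radians(loft_deg))
    vy = power * math.sin(math.radians(loft_deg))
    x = y = 0.0
    path = [(x, y)]
    while y >= 0.0:
        x += vx * dt
        vy -= G * dt
        y += vy * dt
        path.append((x, y))
    return path


drive = golf_shot(power=30.0, loft_deg=12.0)  # a driver-like shot
print(f"carry distance: {drive[-1][0]:.1f} m")
```

Here is a video play-through of the demo: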

The demo level in Unity, and an award-winning student research poster on the design of our USCB golf game:

In 2017 they worked on fantasy levels, each with a different technical and aesthetic challenge, such as an interactive navigation HUD and a floating tower-defense dynamic.

__________

Graduate Animation-Games Research (Performatology 2010-2013)

UCSC Master’s in Computer Science (MSc) Games Research (2010-2013)

Performatology is a research area and critical approach to game theory that I developed for my UCSC graduate thesis in Computer Science, and which led to a computational method for studying artistic forms of gesture. The concept was developed as a critique of more common areas of game studies, such as narratology and ludology, which focus on story and play but underrepresent the importance of artistic figure representation in interactive media. As games move towards interactive cinema, like the latest Tomb Raider or Uncharted titles, it becomes increasingly important to address problems related to generating dramatic gestural performances by virtual characters. Performatology was developed in Arnav Jhala’s lab, the Computational Cinematics Studio, which was part of the Games & Playable Media research center at UCSC.

Performatology 2013 Research Poster.

We conducted a study in our lab to quantify performative gesture quality using motion capture data from a Kinect dance game and machine learning. Our algorithms were based on principles of animation and acting used by Disney animators and live performers to create interesting figure poses.
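The actual features and models are described in the paper; purely as a simplified illustration of the kind of pose measure involved, here is a hypothetical “pose openness” feature computed from Kinect joint positions:

```python
import numpy as np


def pose_openness(joints, center_idx=0):
    """Illustrative feature: mean distance of all joints from the body's
    center joint. Open, extended poses score higher than closed ones,
    echoing the animation principle of a strong silhouette.
    joints: (n_joints, 3) array of Kinect joint positions."""
    return float(np.mean(np.linalg.norm(joints - joints[center_idx], axis=1)))


def gesture_dynamics(frames):
    """Openness over a captured gesture; its variance is a crude proxy
    for how dramatically the pose changes over time."""
    scores = np.array([pose_openness(frame) for frame in frames])
    return scores.mean(), scores.var()


# frames: e.g., a (n_frames, 20, 3) Kinect skeleton sequence
frames = np.random.rand(60, 20, 3)  # stand-in data
mean_openness, variation = gesture_dynamics(frames)
```

Watch a video demo related to our paper: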

ICIDS Performatology Poster, 2012.

UCSC Master of Fine Arts in Digital Arts & New Media (MFA) Animation Research (2008-2010)

Mimesis & Mocap Show, 2010.

My UCSC DANM MFA thesis explored using motion capture for live performance as a fulfillment of Edward Gordon Craig’s über-marionette concept, and audience reactions to a live stage performer interacting with their digital double. I performed dance and classic mime gags with a life-sized digital double in two live stage shows, Stop the Press! and the 2010 DANM Thesis Exhibition, along with other performative media, like a giant eyeball projection animated live using a game controller.

Video documentation of my Performative Media Group performance in UCSC’s production of Stop the Press! and my DANM thesis performance:

A demo of the interactive Maya 3D eyeball that actors performed on stage using a game controller and projection during Stop the Press!:

__________

Books (2000-2008)

From 2000 to 2008 I was the Course Director of the technical animation courses at Full Sail University in Orlando. During this period I wrote three popular books that were on Disney Animation’s recommended reading list.

Mel Scripting a Character Rig Book, 2008.
Book Review.

__________

Selected Creative & Professional Work

Poster for our USCB Film & Digital Media Symposium in 2017:

Animation for USCB Center for the Arts stage production of Little Shop of Horrors in 2015:

Paintings, animations, game design, and mixed-media work for USCB Faculty Exhibitions 2014-2016:

In the 1990s I worked as a staff broadcast designer and animator for NBC O&O networks in New York City and Fort Lee, NJ, where I designed show opens, bumpers, and news graphics for CNBC, America’s Talking, and MSNBC. I also freelanced for companies like Balsmeyer & Everett and the GT Group, where I worked on special effects for Woody Allen’s Everyone Says I Love You, titles for The First Wives Club, and show opens for ESPN2.


I am an Assistant Professor of Realtime VFX and Virtual Production at NCSU, where I teach game design, 3D animation, and extended reality courses in the Art + Design Dept.