
Adventures in AI - My first week with Cuebric

A few weeks back, I was honoured to be one of a small group of artists selected to try out Cuebric. I finally got hold of my copy at the beginning of the week. It’s been a busy week, but I managed to get a few hours in.

First impressions: Cuebric runs as a web interface and is a great introduction to AI image generation. It integrates with XR stages (#Disguise) and can be used to create 2.5D images in #Unreal. The GUI is relatively intuitive, although some features are somewhat hidden (like exporting a depth map of an image). Part of it uses layers in a similar way to Photoshop, and you can hide and merge layers. There is easy access to the tutorials, and it is relatively straightforward to figure out; the prompting and tool panels link to tutorials right in the GUI, which is also very helpful. The AI generation part seems similar to the basic text-to-image workflow of most diffusion systems, but in a simpler and more user-friendly interface. The build seems to change daily, and even during the week I noticed some improvements.

The web-based software is pretty stable but does crash sometimes, especially when upscaling and segmenting. It also gets unstable when you leave it running for too long.

One thing I really loved was the history, which can call up all of your images together with the prompts you originally used. This feature really helps when branching off from something interesting you created, and it should be almost universal so you can keep track of your prompts.

For this one-month experiment I am working on a personal project: the story of my father (which involves most of the history of the nuclear bomb). Much of my work this week was based on things that happened between 1925 and 1945 in China. Probably because of the time period and the limited number of source images, I can see the limits of AI diffusion: many of the generated images are relatively similar and stereotypical in style, and seem to draw from a limited pool of sources that doesn’t take the cosmopolitan nature of Shanghai and the French and International Concessions into account. A lot of results also came out in black and white until I added that to the negative prompts.

Where the image generation seems to struggle, at least this first week, is with people. This may be due to bad prompting on my part, or just the limited amount of training on prewar Shanghai images. I don’t think Cuebric was necessarily conceived as software for this kind of image generation; it is more a tool for creating backgrounds to integrate with real actors on an XR stage. But since I am using it for concept and trailer generation I do need some people, and I found that they were often malformed or cloned. I also felt the images were less cinematic than I would have liked, even though I kept prompting for wide and extreme wide shots.

My first prompts were related to the old city of Shanghai where my father grew up. The images were not bad, but a lot of them were small variations on the same theme. Changing the seed, CFG scale, sampling method or prompt gave me very similar results.

Where Cuebric shines, and where it stands out from other AI generation platforms, is in creating and cutting out quick masks. The Cuebric workflow is to create an image you like, then cut out the elements you want as foreground, middle ground and background. This is very easy: what until recently was very painful roto work, Cuebric makes simple by cutting by object or using a depth map. I would have loved to be a bit more precise on these masks; small things like leaves or fingers are often left out, and a way to add or subtract in object mode would have helped a lot. To do a parallax effect you have to fill in the holes in the background plate once you have cut out the foreground and middle-ground elements. You can do this using the inpaint tool, but I didn’t find the results that great, so I ended up doing it in Photoshop and reimporting that image into Cuebric (the import function is very useful).
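Out of curiosity, here is roughly what a depth-map cutout boils down to, as a minimal numpy sketch of my own (not Cuebric’s actual code; the `split_by_depth` function and the 0 = far, 1 = near depth convention are my assumptions):

```python
import numpy as np

def split_by_depth(image, depth, threshold=0.5):
    """Split an RGB image into foreground/background RGBA layers
    using a normalized depth map (0.0 = far, 1.0 = near)."""
    near = depth >= threshold           # boolean foreground mask
    h, w, _ = image.shape
    fg = np.zeros((h, w, 4), dtype=np.uint8)
    bg = np.zeros((h, w, 4), dtype=np.uint8)
    fg[..., :3] = image                 # same colours in both layers...
    bg[..., :3] = image
    fg[..., 3] = near * 255             # ...but opaque only where near
    bg[..., 3] = (~near) * 255          # and opaque only where far
    return fg, bg
```

A real tool would feather the mask edge and handle half-covered pixels (leaves, fingers), which is exactly where the hard thresholding above falls apart, and where I wished for manual add/subtract controls.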

Cuebric generates all images at a relatively low resolution. The process is to first create an image you like, then slice it up into layers, fill in the background, and then up-rez each layer in a process called superscaling. Superscaling actually works really well (but can be a bit buggy). Much like Topaz AI, you can take a layer and scale it up to 4x the resolution (which is still less than 4K). Strangely, you don’t seem to be able to scale more than one layer at a time, so if you have several layers it can take a while, and that step was almost always buggy. I also couldn’t superscale my original image first and then do the cutouts; instead I had to start from the lower-resolution image, cut out, and then scale each layer.

You can then test what it looks like using something called parallax preview, which slides the layers over each other, creating a “fake” 3D effect. The same tool can also give you a depth map, which you can use in Unreal or any other software to create an extrusion of your image.
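The parallax trick itself is simple: shift each layer by an amount proportional to its depth and composite back-to-front. A rough numpy sketch of the idea (my own illustration, assuming straight RGBA layers and one 0 = far, 1 = near depth value per layer; not Cuebric’s implementation):

```python
import numpy as np

def parallax_frame(layers, depths, camera_x):
    """Composite RGBA layers back-to-front, shifting each one
    horizontally by camera_x scaled by its (0=far, 1=near) depth."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    # paint far layers first so near layers cover them
    for layer, d in sorted(zip(layers, depths), key=lambda p: p[1]):
        shift = int(round(camera_x * d))          # near layers move more
        shifted = np.roll(layer, shift, axis=1)   # wrap-around for simplicity
        alpha = shifted[..., 3:4].astype(np.float32) / 255.0
        out = shifted[..., :3] * alpha + out * (1 - alpha)
    return out.astype(np.uint8)
```

Sliding `camera_x` over a range of values and saving each frame gives you exactly the kind of fake-3D slide the preview shows; the holes this reveals behind the foreground are why the background plate has to be inpainted first.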

You can then export the layers to Unreal, After Effects or whatever software you are using. I ended up using the Cuebric depth map as atmospheric mist, created my depth with ZoeDepth from the Cuebric-generated image adjusted in Photoshop, and then put it all together in AE.

This was one of my first tries: an image generated in Cuebric (old city, Shanghai, inner courtyard), with the layers cut out in Cuebric. The background was fixed in Photoshop. The depth map is from Cuebric, but the 3D depth was done in ZoeDepth. All animated quickly in AE.

Thoughts so far: it is a fast process for creating something that is “good enough” to show an idea. It can be a great tool for creating animatics; you could, if you wanted, drop in your own images and quickly up-rez and segment them, for example. It also combines a few things into one “app”, such as segmenting and creating depth maps. However, I miss the control I get in ComfyUI and Photoshop.

I’m grateful to have the chance to try out this tool. It allowed me to catch up on AI and to play around a bit with Pika and ComfyUI. As for Cuebric, I understand it is still being tinkered with, and I think it can very quickly become a lot better. So far I am a bit underwhelmed with the diffusion: how it handles people and the stereotypical results for period China could be a lot better, though that could be down to bad prompting, still learning the software, or just the type of project I am working on. For context, I tried the same prompts in ComfyUI; some images came out “better” there and others came out “better” in Cuebric. See below.

The plan is to get a few images I am happy with and then try running them through Runway or Pika and see what happens.

——

An image from the January 28 Incident (the Battle of Shanghai, January 28, 1932), Cuebric

Another one from Cuebric with weird cloned kids in the background

Here is the same prompt in ComfyUI (similar issues with limbs but more interesting lighting)

I had a bit more success with Aurora College in the French Concession (although it does not quite look like it should). The results from Cuebric were better than the ones from ComfyUI (without any “fixes”)


Behance highlights IGA in their 3D Art Section

I'm proud and honored that Behance chose to highlight the IGA commercial I directed in their 3D Art section. Their curatorial team features a small number of projects on the front of their gallery each day and only picks the best work that effectively promotes the 3D Motion community.

https://www.behance.net/gallery/184443769/IGA-Holiday-2023-Mr-Beavers-Yule-Log?tracking_source=curated_galleries_3d-art

It is also on my very own website - right here:


Big news about my availability

After more than five years at Moment Factory, I have decided to go back to freelancing and will once again be available for work.

I am leaving full of gratitude for everything I have learned during my time there. Not only have I gained knowledge about the technology involved in projection mapping, interactivity, and working in real-time on enormous canvases, but I have also developed skills in creative and client management and leading creative teams.

I am also grateful for the experiences and amazing people I have met along the way, including both clients and collaborators, and feel fortunate to have been part of such a large and inspiring group of creatives who regularly engage in self-reflection about the creative process.

To those I have met along the way, I will miss you and hope we can collaborate again. To all future collaborators, I am excited to apply the knowledge and ideas I have gained.


AT&T and Resort World Las Vegas win awards

Moment Factory picked up some awards at the Digital Signage Experience last week in Vegas for two projects I’ve worked on: Resorts World Las Vegas and the AT&T Discovery District. We won prizes in Hospitality and Public Spaces, as well as Digital Signage Experience of the Year (AT&T) and Digital Signage Content of the Year (Resorts World).

Congratulations to everyone who contributed!

you can see my work for AT&T HERE

and for RWLV HERE

https://www.digitalsignageexperience.com/digitalsignageexperien/dse-awards-0


LabO

I’m thrilled to finally be able to talk about LabO! I’m one of four co-directors who worked on this: Samuel Tétrault and Isabelle Chassé from 7doigts, and Alexandre Michaels and me from Moment Factory.


For those who have not had the chance to try this interactive social experiment created by Moment Factory in collaboration with The 7 Fingers: LabO is an experimental workshop connecting participants through movement, music, technology and emotion. Still in its prototype form, this completely unique experience aims to connect strangers through dance and collaboration.

Thanks to the artists and developers on the project, and to all the test subjects, friends, coworkers and others who have generously donated their time and feedback throughout this early development phase.

IN - https://www.instagram.com/p/ClCTXuthW9x/?hl=fr
FB - https://www.facebook.com/MomentFactory/posts/10160718257420168

see more here: LabO


Finally - after many years, a small update

To be honest, there is not much I can show publicly of my work at Moment Factory. One of the biggest and most amazing projects I have done I can’t show (but if you want to see it, I can show it privately). However, I finally updated my website and added a multimedia section covering some of the finished work I am allowed to show. Click here to get to the page.


Moment Factory

If it has been quiet here for a while, it is because I am now working with Moment Factory on a couple of very big, long-term projects. Lots of fun!


Nesquik Ideabox

Here's a little piece I worked on for most of the spring. It was a technically challenging job, trying to get a paper look in such a compressed time period. It was also a lot of fun, and interesting to create a fully digital kitchen and integrate it with the green screen.

Click on the image to see the final commercial.


Back at the Lab

I got invited back to the Lincoln Center Directors Lab for a second year (they take back 4-5 people from the previous year, but only once). It was truly an amazing experience yet again. This time I think I learned more about directing, and about how directing for theatre differs from directing for film and TV.


At The Top, Burj Khalifa Sky

I can finally share this cool project that I worked on last spring in Dubai with GSMPRJ°. It is an interactive experience for the 148th floor of the Burj Khalifa. I will upload a making-of at a later point, but this should give you an idea of what it is:

