We started this project about a year ago. We thought it would be easy. There was an event, GOAT CONF ONE, where we were doing a hackathon for the GenAI Photobooth. Chuck, er… Renan Barbosa commented that this was too easy, boasting that we should also do the JetRacer from NVIDIA, also known as Project Leatherback. It is a really cool concept: an "AI RC Racer" built around the Jetson Nano. The car hardware was built and assembled for the event, but the Simulation-to-Reality (Sim2Real) work was still being developed. And still is…





We did the event. It was great for what it was: a gathering of like-minded individuals talking about cool stuff. But we aren't done yet, and we had to finish a few other things in the meantime. The GenAI Photobooth ended up at two or three subsequent hackathons and many other events, and we also ran it at Maker Faire.
So what is an Interactive Generative AI Photobooth?
- Generate a background image based on a request or a canned theme
- Take your picture and remove the background, without using a green screen
- Composite you onto that background and tag it with the event name / frame
- You get a QR code to download it [WONTFIX]
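The steps above can be sketched end to end. This is only an illustrative outline with stand-in stage functions (the names here are hypothetical); the real notebook would call an image-generation model for the background, a segmentation/matting model for the cutout, and a QR library for delivery.

```python
# Hedged sketch of the photobooth flow. Each stage is a stand-in:
# the real pipeline swaps in a diffusion model, a person-segmentation
# model (no green screen needed), and a compositing/QR step.

def generate_background(theme: str) -> dict:
    # Stand-in: the real step prompts an image-generation model with the theme.
    return {"kind": "background", "theme": theme}

def remove_background(photo: dict) -> dict:
    # Stand-in: the real step runs person segmentation on the camera frame.
    return {"kind": "subject", "source": photo["source"]}

def composite(subject: dict, background: dict, event: str) -> dict:
    # Stand-in: the real step alpha-blends the cutout over the background
    # and stamps the event name / frame graphic.
    return {
        "kind": "framed",
        "theme": background["theme"],
        "subject": subject["source"],
        "event": event,
    }

def pipeline(theme: str, photo: dict, event: str) -> dict:
    bg = generate_background(theme)
    cutout = remove_background(photo)
    return composite(cutout, bg, event)

result = pipeline("retro arcade", {"source": "camera0"}, "GOAT CONF")
```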





We actually started "vibe" coding this on the side to prepare for the next hackathon. I picked this project because I knew it was possible in a short amount of time. (Ahem.) Turns out vibe coding does work, and it is only getting better. It is a Jupyter Notebook, so it isn't pretty and it isn't finished, but it works. It is a great teaching project.
We have to haul the 4090 machine around, though: 50 lbs to ship these Docker containers. It was fun, but now it is over. To get quality pictures you still need professional lighting and a general sense of what you are doing. This technology will soon run in near real time on your phone. Back to the workstations with the 4090s.
Modern Workstations
Between fighting with GPU drivers and out-of-memory problems, we built a few of these systems as backups because they are hard to source. 4090s have been sold out for over six months; 3090s are available and are the best bang for your buck right now. All of our development targets NVIDIA GPUs. This is a practical choice: every other GPU on the market lags NVIDIA's tooling for AI training and inference, and probably for graphics performance too.
With 24GB of VRAM and over 10,000 CUDA cores, you can run an LLM or two and maybe do image- and video-generation flows in a reasonable amount of time. This is also a great system for running Isaac Sim and training robots in a simulator.
At the conference we had to provision cloud instances for everyone, since each workstation costs upwards of $4K. We ran the computers in the cloud thanks to Infinite Compute's donation of credits, renting them by the hour. Still, having a local workstation is a must when doing Isaac Sim work, in my experience.
Racer Update: May
Moving the lab three times has been three too many. But we landed at the college while we finish up our last semester. The plan is still to graduate in the fall, but hopefully the bulk of the work will be done in the next 60 days, which marks the one-year point from when we wanted to be racing. The hardware is all there and the reinforcement-learning training of the models is working. We are still trying to figure out vision.


What’s Next?
With the RF kill switch now designed and tested, we need to fabricate a few more cables and "finally" assemble the car. With this safety feature in place we will begin with teleoperation again, which is just publishing commands over Wi-Fi to the car. We have never run the car off blocks because we didn't have the kill switch. We hope to start teleop next week.
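To make "publishing commands over Wi-Fi" concrete, here is a minimal sketch using a plain UDP socket with a JSON payload. This is an assumption about the transport, not our final stack (a ROS 2 topic would be a natural alternative), and the address is a placeholder for the Jetson's Wi-Fi IP.

```python
# Hedged teleop sketch: send normalized throttle/steering commands to the car
# over UDP. Transport, port, and payload format are illustrative assumptions.

import json
import socket

CAR_ADDR = ("127.0.0.1", 9999)  # placeholder; the real target is the Jetson's Wi-Fi address

def make_command(throttle: float, steering: float) -> bytes:
    # Clamp to the normalized [-1, 1] range a motor controller typically expects.
    throttle = max(-1.0, min(1.0, throttle))
    steering = max(-1.0, min(1.0, steering))
    return json.dumps({"throttle": throttle, "steering": steering}).encode()

def send_command(sock: socket.socket, throttle: float, steering: float) -> None:
    # Fire-and-forget datagram; the car-side loop applies the latest command.
    sock.sendto(make_command(throttle, steering), CAR_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(sock, 0.3, -0.1)  # gentle forward, slight left
sock.close()
```

Keeping the payload tiny and stateless like this means a dropped packet just means the car holds (or times out to) the last safe command, which pairs well with the hardware kill switch.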

Sim2Real
Currently the flow is to use Isaac Lab and SKRL (a Python reinforcement-learning library) to train the two models that make up a reinforcement-learning policy. Once a policy is trained, it can be run for inference in the simulator. In the case of the video above, it is doing simple waypoint navigation. The next step is to transfer this model from the workstation to the Jetson Nano.
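At inference time, the deployed policy is just a function from an observation (pose plus goal) to an action (throttle, steering). As a hedged illustration of that interface, here is a tiny proportional waypoint controller standing in for the trained network; the real policy is a neural net exported from SKRL, and this controller is not our actual model.

```python
# Stand-in for the learned waypoint-navigation policy: same observation->action
# interface the Jetson-side inference wrapper would expose, but implemented as
# a simple proportional controller for illustration.

import math

def waypoint_policy(x: float, y: float, heading: float,
                    goal_x: float, goal_y: float) -> tuple[float, float]:
    # Bearing to the goal, relative to the car's current heading.
    bearing = math.atan2(goal_y - y, goal_x - x) - heading
    # Wrap to [-pi, pi] so the car turns the short way around.
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    steering = max(-1.0, min(1.0, bearing))   # proportional steering, clamped
    dist = math.hypot(goal_x - x, goal_y - y)
    throttle = max(0.0, min(1.0, dist))       # slow down as the goal nears
    return throttle, steering
```

Whatever replaces this function (ONNX runtime, TensorRT engine) just has to honor the same signature, which keeps the rest of the car-side code unchanged.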
This is where the fun begins: ONNX, TensorRT. We are still stuck on vision. Stay tuned!

