I didn't need this, but I used AI to 3D print a tiny figurine of myself – here's how

David Gewirtz/ZDNET


ZDNET’s key takeaways

  • AI can turn a single photo into a printable 3D figurine.
  • Multiple AI systems quietly power each step of the process.
  • Consumer tools make this workflow accessible at home.

Writing for ZDNET has many benefits. One of my favorites is that I get to try out new technologies and report back to you about how they work. Most of the time, those technologies have some productive use. But sometimes, I get to try something out that can’t really be justified in any way, other than that it’s cool.

That’s today’s project. In this article, I’ll show you how I started with a picture of me, ran it through some intermediate AI steps, and turned it into a physical 3D-printed plastic figurine of me. Do I need a me figurine? No. Is it cool? Yeah. Does it show off another AI capability? Yep.

Also: Why this $190 AI-designed 3D printed sneaker surprised me

I’ll be honest. I didn’t expect my editor to sign off on this pitch. But since she did, let’s have some fun with it. If you have a 3D printer, you can follow the same steps and turn yourself into a plastic figurine, as well.

Come to ZDNET. We’ll teach you stuff you never expected, but now can’t live without.

AI drone-assisted starting picture

I decided to start with the following image. We’re in a gray winter here in Oregon right now. The sunny outdoor nature of this image cheers me up.

David Gewirtz/ZDNET

This isn’t just any selfie. The photo was taken by the DJI Neo drone in autonomous mode, flying in front of me as I was walking and recording a YouTube video. The drone was able to maintain stable flight, hold a steady distance in front of me, and fly backward, all because of its built-in machine vision and AI capabilities.

Also: Own a DJI drone? Here’s how the FCC ban affects you today

The picture I chose was just one frame from that 20-minute narrative video.

Refine the image

To prepare the image for use by the tool that would turn it into a 3D model, I wanted to make sure the representation of the real me was suitable. To do this, I pulled the image into ChatGPT and used the new GPT 5.2 Images tool.

Also: I tested the new ChatGPT Images – it’s a stunning improvement, and enormously fun

First, I prompted ChatGPT to give me legs and remove the background. I instructed it to “remove the background, put the man on a white background, complete his legs with pants that go down to his ankles and black sneakers. Make sure the camera is directly in front (this image has the camera slightly higher than the man). Show him standing still, not walking.”

That gave me the first image shown below, on the left.

Screenshot by David Gewirtz/ZDNET

Next, I was concerned that the red illustration on the front of the T-shirt would cause trouble for the eventual 3D print. I also didn’t want it to print the watch or the little microphone. So I told ChatGPT, “Remove watch, microphone, and logo on front of shirt. Keep face the same.”

Also: The best AI image generators of 2026: There’s only one clear winner now

The result is the middle image above. But I thought the shirt was too plain, so I had ChatGPT overlay the logo of my YouTube channel, which is a robot in a 3D printer. I told ChatGPT, “Add logo to the front of shirt. Keep face the same.”

That became the image on the far right. Note that I explicitly told ChatGPT to keep the face the same. I’ve noticed that if I don’t explicitly tell ChatGPT to keep the face the same, it takes unfortunate liberties.

The image on the far right is the image I chose to turn into a 3D model.
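
As an aside, if you ever want to script this kind of cleanup rather than clicking through ChatGPT, OpenAI offers similar image editing through its API. Here’s a minimal Python sketch of the idea. To be clear, I did all my edits in the ChatGPT interface, so the model name, file names, and exact prompt below are illustrative assumptions, not my actual setup.

```python
# A minimal sketch of scripting this kind of image cleanup through
# OpenAI's Images API. I did my edits in the ChatGPT interface, so the
# model name, file names, and prompt here are illustrative assumptions.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("drone_frame.png", "rb") as source:  # hypothetical input file
    result = client.images.edit(
        model="gpt-image-1",  # assumed image-editing model
        image=source,
        prompt=(
            "Remove the background, put the man on a white background, "
            "complete his legs with pants that go down to his ankles and "
            "black sneakers. Keep the face the same."
        ),
    )

# The API returns the edited image as base64-encoded data; save it.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("cleaned_figure.png", "wb") as out:
    out.write(image_bytes)
```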

Image to model

There are a number of services out there that will turn a photo into a 3D model. The one I used is provided for free by the maker of some 3D printers I own, Bambu Lab. I own the Bambu Lab X1 Carbon and the new, larger Bambu Lab H2D. Since Bambu had the software and would generate a model tuned for my printer, it seemed like a no-brainer.

Also: Is that an AI image? 6 telltale signs it’s a fake – and my favorite free detectors

I started by navigating my browser to Bambu’s MakerLab service. I waited for the scrolling banner to bring up PrintU and clicked on it.

Screenshot by David Gewirtz/ZDNET

I can see why Bambu calls this feature PrintU, because you can use it to print you. But here in the US, we usually append the letter U to a university, so the feature reads more like Print University, implying some sort of training tool, rather than PrintYou, which might imply you were going to make a tiny little you, or, in this case, me.

Also: This OS quietly powers all AI – and most future IT jobs, too

In any case, once in the PrintU interface, I clicked the plus button to create a new project. I chose Image Pose from the available options. I waited a few minutes for the site to create a two-dimensional, cartoonish version of the original image.

Screenshot by David Gewirtz/ZDNET

The face wasn’t exactly mine, but it was a fair enough caricature. I think it did a great job of reproducing my body, and I was super impressed with how the vest was an almost perfect reproduction. The middle image is a computer graphic version of my caricature. The right image, generated after clicking Generate 3D Model, is an actual print-ready 3D model.

It’s here that Bambu Lab becomes a little more confusing. To modify the model, you need to use MakerLab credits. When you create an account on MakerLab, you’re given a bunch of credits. Then, if you post objects to the site, you can earn more. I haven’t found any way to buy credits, but I’m sure that feature is hiding in there somewhere.

Screenshot by David Gewirtz/ZDNET

In any case, I had 170 credits. Tweaking the mini-me took only 10 credits, so I went ahead and let PrintU’s AI process my photo and generate the 3D model.

Choosing colors and exporting

Most low-end 3D printers only support printing one color at a time. But over the past year or so, there’s been a surge in affordable multi-color 3D printers that support up to four colors. I was planning on using my H2D, which has two four-color filament sequencers, so I could use up to eight colors.

Also: 5 ways to use AI to modernize your legacy systems

Right out of the design process, PrintU assigned my image nine colors. I didn’t really like how they were distributed.

Screenshot by David Gewirtz/ZDNET

Fortunately, you can replace colors at will. My intention was to use four colors, mostly because I didn’t see any strong need for more. PrintU lets you add and remove colors, so that’s what I did. That left me with a model I could export.
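
PrintU’s color mapping is a black box, but the general idea, boiling an image down to a small printable palette, is classic color quantization. Here’s a rough Python sketch of that idea using Pillow’s built-in quantizer. This isn’t what PrintU actually runs, and the file names are placeholders.

```python
# A rough sketch of color quantization, the general idea behind mapping
# an image to a handful of filament colors. This is not PrintU's actual
# code, and the file names are placeholders.
from PIL import Image

original = Image.open("figurine_render.png").convert("RGB")

# Collapse the image down to four representative colors.
reduced = original.quantize(colors=4)

# Print the four RGB triples Pillow chose for the palette.
palette = reduced.getpalette()[: 4 * 3]
print([tuple(palette[i : i + 3]) for i in range(0, len(palette), 3)])

reduced.convert("RGB").save("figurine_4color.png")
```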

Preparing for 3D printing

3D printing, particularly filament-based 3D printing, is the process of extruding layer after layer of molten plastic, one layer on top of the next. A program called a “slicer” converts a three-dimensional model into slices (the layers) and then generates the G-code instructions that tell the printer how to operate.
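
To give you a feel for what a slicer produces, here’s a toy Python sketch that writes G-code-style commands for a single square perimeter at one layer height. Real slicers derive these paths from the model’s geometry and juggle perimeters, infill, speeds, and temperatures on top of that; every value here is made up.

```python
# A toy illustration of what a slicer emits: G-code-style movement
# commands for one square perimeter at a single layer height. Real
# slicers derive these paths from the model's geometry; every number
# here is made up for illustration.
LAYER_HEIGHT_MM = 0.2
EXTRUDE_PER_MM = 0.033  # filament pushed per mm of travel (toy value)

def square_layer(z: float, size: float = 20.0) -> list[str]:
    corners = [(0.0, 0.0), (size, 0.0), (size, size), (0.0, size), (0.0, 0.0)]
    lines = [f"G1 Z{z:.2f} F600 ; lift to the current layer height"]
    x0, y0 = corners[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f} ; travel move, no extrusion")
    for x, y in corners[1:]:
        distance = abs(x - x0) + abs(y - y0)  # axis-aligned moves only
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{distance * EXTRUDE_PER_MM:.4f}")
        x0, y0 = x, y
    return lines

for command in square_layer(z=LAYER_HEIGHT_MM):
    print(command)
```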

More modern slicers, particularly the one used by Bambu Lab, support a variety of extra features. The one most appropriate to this project is the ability to “paint” colors, which the slicer then converts into machine instructions.

Also: The best free AI courses and certificates for upskilling in 2026 – and I’ve tried them all

The 3D printer file that PrintU generated was quite good. The only thing I didn’t like was the absence of eyeballs. You can see that on the image on the left. So I opened up the paint tool and just dotted two black eyeballs into the image, which you can see on the right. I debated making the background of the eyes white, but that didn’t seem necessary for a figure that wasn’t going to stand more than about 18 inches tall.

Screenshot by David Gewirtz/ZDNET

When printing layers of molten filament, you have to take gravity into account. If you try to string melty filament over an empty space, it will naturally sag or drip. The way slicers get around this is by producing support structures to hold up the overhanging elements. 

Once the print is finished, the supports are removed. In this image, you can see both the supports, around the outside of the figure, and an infill pattern. The infill is also designed to hold up layers printed above it, again because of the droopy nature of melted plastic.
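
The rule of thumb behind support placement is simple: a downward-facing surface that leans more than about 45 degrees past vertical can’t hold itself up. Here’s a small Python sketch of that test on a single mesh triangle. The threshold and mesh data are illustrative, not taken from any particular slicer.

```python
# A sketch of the classic overhang test behind support placement: a
# downward-facing triangle that leans more than ~45 degrees past
# vertical gets flagged for support. The mesh data and threshold are
# illustrative, not taken from any particular slicer.
import numpy as np

def needs_support(triangle: np.ndarray, threshold_deg: float = 45.0) -> bool:
    """triangle: 3x3 array of xyz vertices; +z is the build direction."""
    normal = np.cross(triangle[1] - triangle[0], triangle[2] - triangle[0])
    normal = normal / np.linalg.norm(normal)
    # A face tilted theta degrees past vertical has a normal whose
    # downward z-component is sin(theta); flag it once past the threshold.
    return -normal[2] > np.sin(np.radians(threshold_deg))

# A flat, downward-facing "ceiling" triangle floating in mid-air:
ceiling = np.array([[0, 0, 10], [0, 10, 10], [10, 0, 10]], dtype=float)
print(needs_support(ceiling))  # True: it would droop without support
```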

Screenshot by David Gewirtz/ZDNET

Because I have a printer that can support up to eight spools at once, I was able to add a special support interface material spool. This is a substance that is printed at the top of the support, just below the filament it’s tasked with holding up. What makes this special is that the support interface material doesn’t fuse with the figurine’s plastic, making it much easier to remove the supports.

Also: 10 tiny gadgets I never leave home without (and how they work)

With the Bambu Lab H2D, I not only used an extra spool of support material, I also included an extra spool of black filament. The H2D has the ability to automatically move from one spool to the next if it runs out of filament. Since about half of the project uses black filament, it was helpful to put an extra spool in the printer as a backup in case the first roll was used up.

The print took about three days. The result looked like something out of Darth Vader’s medical lab.

David Gewirtz/ZDNET

With a little careful work, I was able to remove all of the supports and was left with the final figurine.

David Gewirtz/ZDNET

Let me know. Do you think it looks like the original image?

Lots of AI

To recap, AI contributed a lot to this project.

  • The original image was taken by a drone using AI for positioning, flight stabilization, and subject tracking.
  • ChatGPT’s image tool was used to remove the background, add legs and feet, remove the microphone, watch, and T-shirt pattern, and add a new logo to the shirt.
  • Bambu Lab’s image-to-3D tool converted the real-world photograph to a 2D cartoon, and then turned that 2D cartoon into a functional 3D model.
  • The printer’s slicer used algorithmic tricks to automatically place supports on the model, while the 3D printer itself used a range of AI-powered sensors to monitor and manage filament flow and deposition.

And I, humble as I am, provided the unmistakable me-ness that made the whole thing look good.

Also: I love Photoshop, but Canva’s free Affinity tools won me over (and saved me money)

What about you? Would you be willing to try to turn a photo of yourself into a 3D model or a physical print? Which part of this process intrigues you most, the AI image cleanup, the photo-to-model conversion, or the actual 3D printing? Do you see practical uses for tools like this, or is the fun factor enough? If you’ve experimented with similar AI or 3D printing workflows, what worked well, and what surprised you? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
