A new OpenAI tool converts text to 3D models.

It’s unusual for an innovation to make waves in both the art and artificial intelligence communities, but OpenAI did just that with the release of the DALL-E image generator. Simply enter a description, and DALL-E will bring it to life. OpenAI’s new Point-E algorithm has a similar pitch, but instead of producing a 2D image, it generates a 3D model of your description.

Rather than going straight from text to a 3D mesh, Point-E is made up of two different AI models. The prompt is first processed by a text-to-image AI to create a standard 2D image. The flat rendering is then converted into a 3D model by an image-to-3D AI. So, if you asked Point-E to draw a traffic cone, it would start with a striped triangle. It is up to the second AI to recognize that traffic cones are cones and generate the correct 3D shape.
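The two-stage design can be sketched in a few lines. The functions below are illustrative stand-ins, not Point-E's actual API: each stage is stubbed out just to show how text flows through an intermediate 2D image before becoming a cloud of 3D points.

```python
# Conceptual sketch of Point-E's two-stage pipeline.
# The function bodies are toy placeholders, not OpenAI's real models.

def text_to_image(prompt: str) -> list[list[int]]:
    """Stand-in for the text-to-image stage: returns a tiny 'image' grid."""
    # A real model would render the prompt; here we fake a 2x2 grayscale image.
    v = len(prompt) % 256
    return [[v, 0], [0, v]]

def image_to_point_cloud(image: list[list[int]]) -> list[tuple[float, float, float]]:
    """Stand-in for the image-to-3D stage: lifts pixels into 3D points."""
    points = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            # Use pixel intensity as a stand-in depth coordinate.
            points.append((float(x), float(y), value / 255.0))
    return points

def generate_3d(prompt: str) -> list[tuple[float, float, float]]:
    """Chain the two stages, as Point-E does: text -> 2D image -> 3D points."""
    return image_to_point_cloud(text_to_image(prompt))

cloud = generate_3d("a traffic cone")
print(len(cloud))  # one 3D point per pixel of the intermediate image
```

The key property the sketch captures is that the second stage never sees the text at all, which is why it must infer, for example, that a striped triangle is really a cone.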

Point-E samples

This isn’t an entirely new concept; Google has a tool called DreamFusion that can do something similar. DreamFusion, however, was designed to run on a machine with four of Google’s custom TPU v4 AI processors, and even that hardware takes 90 minutes to generate a model. On a single GPU, you’d be looking at multiple hours per model. Point-E is much faster, and it can run on a single-GPU computer.

Many of us have had a good time telling DALL-E to make bizarre renderings, but Point-E isn’t quite ready for that kind of instant AI gratification. According to the paper, this is a first step toward foundational technology that could one day be as quick and easy as DALL-E. By OpenAI’s own admission, Point-E is still far from commercial 3D modeling: the results are more akin to a rough cloud of points than a finished mesh.

A Point-E point cloud converted to a mesh.

The Point-E models, when smoothed and processed, can produce a passable representation of a real object, as seen above. The true breakthrough here is efficiency, which OpenAI claims is “one to two orders of magnitude faster” than existing systems. Perhaps, in the future, Point-E will infiltrate the domain of 3D modeling in the same way that DALL-E has in art. Adobe recently decided to allow AI-generated art in its stock image library, which did not sit well with all artists.

If you want to play around with Point-E, the source code is all available on GitHub. To make it work, you’ll need Python as well as some experience with programming and command line tools. However, because of the lower hardware requirements, it is more accessible than DreamFusion.
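Getting started looks roughly like the following. These steps assume the standard layout of the openai/point-e repository and a working Python environment; check the repo's README for the current instructions:

```shell
# Fetch the Point-E source and install it as an editable package.
git clone https://github.com/openai/point-e.git
cd point-e
pip install -e .
```

From there, the repository's example notebooks walk through generating point clouds from text and images.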
