
OpenAI’s DALL-E 2 text-to-image system is remarkably powerful, and only recently has it been made available to customers for commercial use.

Even so, some early adopters say they are already testing the system for various business use cases, waiting for the day when DALL-E 2 is stable enough to deploy in production. That is despite current limitations, including the absence of volume licensing and an official API.

Online personal styling service Stitch Fix uses recommendation algorithms to help customers find the right outfit, and the company has reported experimenting with DALL-E 2 to better visualize its products according to criteria like color, fabric, and style. In the pilot, DALL-E 2 generated images of items that customers requested, such as a “high-rise, red, stretchy, skinny jean,” so that a stylist could find a similar item in stock.

DALL-E 2 helps us surface the most informative characteristics of a product in a visual way, ultimately helping stylists find the perfect item that matches what a client has requested in their written feedback.

Stitch Fix
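
The workflow Stitch Fix describes, turning a short written request into candidate product images, maps naturally onto programmatic image generation. The sketch below is purely illustrative and is not the pipeline Stitch Fix actually used (which, as noted above, predates an official API); it assumes OpenAI's current Python client and its Images endpoint, with an OPENAI_API_KEY set in the environment.

# Hypothetical sketch: generate reference images for a styling request.
# Not Stitch Fix's actual pipeline; assumes the `openai` Python package.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_reference_images(request_text: str, n_images: int = 4) -> list[str]:
    """Return URLs of generated images for a client's written request."""
    prompt = f"Product photo of a {request_text}, plain background, catalog style"
    result = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=n_images,
        size="512x512",
    )
    return [image.url for image in result.data]

if __name__ == "__main__":
    for url in generate_reference_images("high-rise, red, stretchy, skinny jean"):
        print(url)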

It’s no secret that DALL-E 2 has peculiarities that leave some early enterprise adopters on edge. In a recent blog post, Eric Silberstein, vice president of data science at e-commerce marketing startup Klaviyo, details his conflicted feelings about the system as a promotional tool.

He says that human figures generated by DALL-E 2 often have unrealistic facial expressions and disproportionate muscles and joints, and that the system doesn’t always fully grasp instructions. For example, when Silberstein asked DALL-E 2 to render a candle sitting on a wooden table against a gray background, it occasionally erased the candle’s lid and blended it into the table, or drew a rim around the candle that didn’t match the rest of the table.

Silberstein’s DALL-E 2 experiment

The editors of Cosmopolitan magazine reached a similar conclusion when they collaborated with digital artist Karen X. Cheng to design a cover for the magazine in DALL-E 2. They noted that the final cover required very specific prompting from Cheng, which underscores the system’s limitations as an art generator.

There are times, however, when the AI’s quirks actually help. Heinz used DALL-E 2 to generate a series of ketchup bottle images for its Draw Ketchup campaign by feeding it a variety of natural-language prompts, including “ketchup,” “ketchup art,” “fuzzy ketchup,” “ketchup in space,” and “ketchup renaissance.” Heinz also curated fan-submitted prompts and shared the resulting images across its social media platforms.

DALL-E 2 Heinz bottles

With AI imagery dominating news and social feeds, we saw a natural opportunity to extend our ‘Draw Ketchup’ campaign, rooted in the insight that Heinz is synonymous with the word ketchup, to test this theory in the AI space.

Jacqueline Chao, Heinz

Clearly, campaigns fueled by DALL-E 2 can succeed when the subject is artificial intelligence itself. Several business users, however, say they have used the system to create assets that don’t show the telltale signs of its limitations.

Software engineer Jacob Martin used DALL-E 2 to design a logo for the open-source project OctoSQL. Martin spent about $30, roughly the going rate for logo design services on Fiverr, and received an octopus cartoon that looks human-illustrated.

DALL-E 2 OctoSQL logo

Motivated by the hypothesis that posts containing images receive significantly more engagement than those without, Don McKenzie, head of design at Deephaven, experimented with using the system to generate thumbnails for the company blog.

McKenzie says that after a weekend of work and $45 in credits, he was able to replace the thumbnails on around 100 blog posts with images generated by DALL-E 2.
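
Batch-generating thumbnails like this is straightforward to script. The sketch below is a hypothetical illustration, not Deephaven’s actual tooling: it assumes a list of placeholder post titles, the openai Python client, and the requests library for downloading each generated image.

# Hypothetical batch script: one DALL-E 2 thumbnail per blog post title.
# Not Deephaven's actual tooling; assumes `openai` and `requests` are installed
# and OPENAI_API_KEY is set in the environment.
from pathlib import Path

import requests
from openai import OpenAI

client = OpenAI()
post_titles = [
    "How we built our real-time query engine",
    "Streaming tables explained",
]  # placeholder titles, not real Deephaven posts

out_dir = Path("thumbnails")
out_dir.mkdir(exist_ok=True)

for title in post_titles:
    result = client.images.generate(
        model="dall-e-2",
        prompt=f"Abstract tech illustration for a blog post titled '{title}'",
        n=1,
        size="512x512",
    )
    image_bytes = requests.get(result.data[0].url, timeout=30).content
    filename = out_dir / (title.lower().replace(" ", "-") + ".png")
    filename.write_bytes(image_bytes)
    print(f"Saved {filename}")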

