Gubatenkov.dev

How to make 3D AI Cup Configurator on Three.js and React.js [Part 1]

Slava Gubatenko

Final app that we’re going to develop: cup-configurator.vercel.app

Introduction

This series of articles assumes basic knowledge of Git, React.js, Three.js and TypeScript, and it deals with the topic of 3D AI configurators built with React.js, Next.js and Three.js. We will write a feature-rich app in TypeScript, with one small caveat: instead of the typical architecture of Three.js applications, we will build our AI configurator using the component-based approach offered by the React library and apply a React renderer for Three.js. In this way, we will simulate a development environment as close to the real world as possible, while simplifying the development of the application by taking advantage of packages such as TypeScript, react-three-fiber and Next.js version 14.1.0.

Advantages of 3D web configurators

But before we dive into the development of a 3D AI cup configurator, it is useful to understand what a 3D configurator is and what its advantages are, using an online store as an example.

Let’s imagine that a coffee lover wants to buy a suitable cup for his collection from an online store. Upon opening the product page, instead of viewing static images, he is immersed in the 3D world of cups, where he can rotate, zoom and customize different attributes of the cup, including color, text and images. Do you think a user would be more likely to make a purchase in such a store? Rhetorical question.

Clearly, 3D configurators benefit both customers and businesses: they improve the user experience, give customers interactive ways to explore products, allow them to customize their merchandise and increase the likelihood of purchase, all while strengthening brand relationships and lifting key business metrics such as conversion rate.

Another great advantage of web configurators is their unprecedented accessibility, allowing anyone with a mobile phone, tablet or even a cheap computer connected to the internet to work with them. This accessibility simplifies the immersion process, removing barriers to entry and making it possible to deliver key product values to users from different socio-economic backgrounds.

In addition, the compatibility of these configurators with mobile devices enhances usability, allowing users to have an immersive experience on the go. Whether at home, on the road or on a break from work, customers can easily access the key features of 3D configurators from their portable devices without downloading any special software, which is certainly very convenient. Unlike traditional software solutions or complex design programs, which often require specialized hardware or significant financial investment, 3D web-based configurators are an order of magnitude cheaper to develop. They also work effectively on widely available devices with an internet connection and an installed browser, eliminating the need for high-end hardware or expensive PC setups.

Core web technologies for creating a 3D AI Cup Configurator

Creating a fully functional 3D AI cup configurator will require two powerful libraries: React.js, used here in the context of Next.js, and of course Three.js with its renderer for React applications. You will also need the Fabric.js package for advanced 2D canvas management. The other dependencies are secondary and can easily be replaced if desired, although this is not recommended.

The main functionality that will be implemented in the application is:

  1. Interactive representation of a GLB cup model on a Canvas3D
  2. Interactive representation of the cup print on Canvas2D, with real-time rendering of the result on the cup model in Canvas3D
  3. Actions panel for advanced Canvas2D management, including downloading the finished print in PNG format
  4. A Settings panel for Canvas2D that adds the ability for the user to:
    1. Add text to the print and customize it
    2. Upload their own images and use ready-made ones
    3. Use predefined patterns as a background
    4. Use predefined images as a background
    5. Add SVG images as geometric primitives
    6. Create AI images using a third-party service
  5. A dark/light theme switcher

Basic technologies for developing functionality:

  1. React.js in the context of Next.js, including the App Router, Layouts, React Client Components (RCC), React Server Components (RSC) and Serverless Functions
  2. TypeScript
  3. Three.js and react-three-fiber
  4. Fabric.js
  5. Zustand as the client state manager
  6. Responsive and beautiful Radix + TailwindCSS components for a modern UI
  7. The Browser API for uploading and downloading images
  8. The freemium API of the Eden AI service as the artificial intelligence provider for image generation

It may seem difficult to master all these technologies at once, but don’t worry: I will explain in detail the basic steps of creating an advanced 3D AI cup configurator in React.js, Next.js and Three.js. If you follow my instructions, creating a 3D AI cup customizer won’t be too difficult. If you still run into difficulties, you can always switch to the main branch of the project and explore the source code of the finished application to understand what works and how it works.

Step-by-step plan for developing a 3D AI Cup Configurator

I propose that we familiarize ourselves with the algorithm, which consists of 5 basic steps; following it, we will create our 3D AI cup configurator on React.js, Next.js and Three.js:

  1. Cloning a remote repository and installing dependencies of a prebuilt template from the starter branch.
  2. Adding a default background image and the required logic to Canvas2D to display a custom print and implementing a panel with additional functionality for Canvas2D:
    1. Clearing the background
    2. Removing Fabric.js objects such as text or images
    3. Controlling the order in which layers are overlaid
    4. Downloading the final print in PNG format
  3. Adding logic to the 3D Canvas to display a GLB model of the cup and a custom print on it based on information from the 2D Canvas
  4. Implementing functionality to personalize a 3D cup. Adding logic to the Settings Panel component and thereby enabling the user to:
    1. Add text to the print and customize it
    2. Upload their own images and use predefined ones
    3. Use patterns as backgrounds
    4. Use images as backgrounds
    5. Add SVG images to the canvas
    6. Generate images using the Eden AI API
  5. Publishing the source code on GitHub, setting up previews and automatic deploys to Vercel

Beginning of implementation

While writing this article I will be using Visual Studio Code and will assume that your IDE and working environment are already set up and ready for the first step.

To focus on developing the core functionality, I have prepared a starter project template, which can be found in the starter branch of the GitHub project repository.

In order to successfully run the commands below, you must already have Git and Node.js installed on your computer.

The starter template already includes:

  1. Basic configurations for Tailwind, Next, TypeScript, Prettier and ESLint, which you can extend as you wish
  2. Preloaded fonts
  3. A default background image for the Canvas2D component
  4. A GLB cup model and a Draco loader for optimized loading of GLB files
  5. The basic application page layout, including a header with a theme switcher
  6. A scene rendered with the Canvas3D component, with a minimally configured environment in which the GLB cup model will be rendered later
  7. The Canvas2D component
  8. A ready-made Settings panel component whose elements can be switched using the corresponding navigation buttons, but which lacks key logic

Next, I suggest that you add the necessary logic responsible for the key functionality of the application to the components step by step on your own and, in case you fail, return to the code snippets in this article or to the final code in the main branch of the project repository.

The main branch of the project repository contains the final, already deployed code, which you can explore if you run into difficulties or want to understand how it works!

Now, clone the repository to the selected directory on your computer and switch to the starter branch of the project tree by entering the following commands in the terminal of your IDE:

text
git clone https://github.com/gubatenkov/cup-configurator.git
git switch starter

Next, install the project dependencies and start the local development server by entering the following commands:

text
npm i && npm run dev

If errors occur at this stage, fix them before continuing. Once the local server is running, the browser should show the following:

Browser screen with successfully launched app on localhost

I propose to start with a description of the data store’s type, and then move on to its implementation. Here is how I see the store:

lib/store.ts
ts
type Store = {
  setTextSettings: (partialTextSettings: PartialTextSettings) => void
  setFabricCanvas: (canvas: FabricCanvas) => void
  fabricCanvas: FabricCanvas | null
  panels: Record<
    'background' | 'geometry' | 'pattern' | 'image' | 'text',
    Panel
  >
}

This is not strictly necessary, but for convenience I suggest putting our store in the /lib directory. Make /lib/store.ts look like this:

lib/store.ts
ts
import type { PartialTextSettings, FabricCanvas } from '@/types'
import { create } from 'zustand'

type Panel = {
  data: Record<string, any>
  label: string
  path: string
  id: number
}

type Store = {
  setTextSettings: (partialTextSettings: PartialTextSettings) => void
  setFabricCanvas: (canvas: FabricCanvas) => void
  fabricCanvas: FabricCanvas | null
  panels: Record<
    'background' | 'geometry' | 'pattern' | 'image' | 'text',
    Panel
  >
}

export const useStore = create<Store>((set) => ({
  panels: {
    text: {
      data: {
        // Let's define default text panel settings
        textSettings: {
          fontStyle: 'normal' as 'oblique' | 'normal' | 'italic' | '',
          backgroundColor: '#ff000000',
          text: 'Start edit me!',
          fontWeight: 'normal',
          fontFamily: 'Lato',
          textAlign: 'left',
          originX: 'center',
          originY: 'center',
          underline: false,
          fill: '#ffffff',
          lineHeight: 1,
          fontSize: 36,
          width: 200,
        } satisfies PartialTextSettings,
      },
      label: 'Text',
      path: 'text',
      id: 1,
    },
    background: {
      label: 'Backgrounds',
      path: 'background',
      /* This is not our case, but potentially other panels
       * may have some data */
      data: {},
      id: 5,
    },
    geometry: {
      label: 'Geometry',
      path: 'geometry',
      data: {},
      id: 3,
    },
    pattern: {
      label: 'Patterns',
      path: 'pattern',
      data: {},
      id: 4,
    },
    image: {
      label: 'Images',
      path: 'image',
      data: {},
      id: 2,
    },
  },
  setTextSettings: (partialTextSettings: PartialTextSettings) =>
    /* This is how Zustand and Redux normally work, but you can use Immer
     * to reduce the amount of code */
    set((prevState) => ({
      panels: {
        ...prevState.panels,
        text: {
          ...prevState.panels['text'],
          data: {
            ...prevState.panels['text'].data,
            textSettings: {
              ...prevState.panels['text'].data.textSettings,
              /* Use structuredClone to make a deep copy of the object;
               * note that you may need a polyfill for compatibility
               * with older browsers */
              ...structuredClone(partialTextSettings),
            },
          },
        },
      },
    })),
  setFabricCanvas: (fabricCanvas) => set({ fabricCanvas }),
  // Initially null
  fabricCanvas: null,
}))

And here are the FabricCanvas and PartialTextSettings types from @/types:

lib/types.ts
ts
export type FabricCanvas = fabric.Canvas & {
  lowerCanvasEl: HTMLCanvasElement
  wrapperEl: HTMLDivElement
}

export type PartialTextSettings = Partial<TextSettings>
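
The `TextSettings` type itself ships with the starter template and is not reproduced in this article. A plausible shape, inferred from the default textSettings object used in the store above, could look like this (the field names come from those defaults; the exact unions are my assumption, not the starter's actual type):

```typescript
// Hypothetical reconstruction of TextSettings, inferred from the default
// textSettings object in the store - not the starter's exact definition.
type TextSettings = {
  fontStyle: 'oblique' | 'normal' | 'italic' | ''
  backgroundColor: string
  text: string
  fontWeight: string | number
  fontFamily: string
  textAlign: 'left' | 'center' | 'right' | 'justify'
  originX: 'left' | 'center' | 'right'
  originY: 'top' | 'center' | 'bottom'
  underline: boolean
  fill: string
  lineHeight: number
  fontSize: number
  width: number
}

// Partial<TextSettings> is what the store's setTextSettings accepts,
// so callers can update any subset of fields:
const update: Partial<TextSettings> = { fontSize: 48, underline: true }
console.log(Object.keys(update).length) // 2
```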

Then, following the plan, we create the useInitFabricCanvas hook to initialize the fabric canvas, along with the useFabricCanvas hook, both in the /lib/hooks.ts file:

lib/hooks.ts
ts
'use client'

import type { FabricCanvas } from '@/types'
import { useLayoutEffect, useCallback, ElementRef, useRef } from 'react'
import { fabric } from 'fabric'
import { useStore } from './store'

export const useInitFabricCanvas = () => {
  const containerRef = useRef<ElementRef<'div'> | null>(null)
  const { setCanvas, canvas } = useStore(
    ({ setFabricCanvas, fabricCanvas }) => ({
      setCanvas: setFabricCanvas,
      canvas: fabricCanvas,
    })
  )

  /* In useLayoutEffect we can initialize the fabric canvas
   * before the browser repaints the screen */
  useLayoutEffect(() => {
    const initCanvas = () => {
      const container = containerRef.current
      // Make sure we have a container in which to render the canvas
      if (!container) return
      const { height, width } = container.getBoundingClientRect()
      // Create the canvas with default options
      const fabricCanvas = new fabric.Canvas(
        document.createElement('canvas'),
        {
          backgroundColor: '#f2f2f2',
          height,
          width,
        }
      ) as FabricCanvas
      // Save a reference to the canvas object in the store
      setCanvas(fabricCanvas)
      // Render both the lower and upper canvases
      fabricCanvas.renderAll()
      // Append the canvas node to the provided wrapper element
      container.appendChild(fabricCanvas.wrapperEl)
    }
    // Make sure that the fabric canvas is initialized only once
    if (!canvas) initCanvas()
  }, [canvas, setCanvas])

  return containerRef
}

export const useFabricCanvas = () => {
  const { canvas } = useStore(({ fabricCanvas }) => ({
    canvas: fabricCanvas,
  }))
  const isMounted = Boolean(canvas)

  const unlockCanvasTextboxes = useCallback(() => {
    if (!canvas) return
    const textboxes = canvas.getObjects('textbox') as fabric.Textbox[]
    textboxes.forEach((tb) =>
      tb.set({
        selectable: true,
        editable: true,
      })
    )
  }, [canvas])

  const lockCanvasTextboxes = useCallback(() => {
    if (!canvas) return
    const textboxes = canvas.getObjects('textbox') as fabric.Textbox[]
    textboxes.forEach((tb) => {
      tb.exitEditing()
      tb.set({
        ...tb,
        selectable: false,
        editable: false,
        selected: false,
      })
    })
    canvas.discardActiveObject().renderAll()
  }, [canvas])

  const setCanvasBackgroundByUrl = useCallback(
    (imageUrl: `${string}.${'png' | 'jpg'}`) => {
      fabric.Image.fromURL(imageUrl as string, (image) => {
        if (!canvas) return
        canvas.setBackgroundImage(image, canvas.renderAll.bind(canvas), {
          scaleY: (canvas.height ?? 1) / (image.height ?? 1),
          scaleX: (canvas.width ?? 1) / (image.width ?? 1),
        })
      })
    },
    [canvas]
  )

  return {
    setCanvasBackgroundByUrl,
    unlockCanvasTextboxes,
    lockCanvasTextboxes,
    isMounted,
    canvas,
  } as const
}
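
Note the `` `${string}.${'png' | 'jpg'}` `` parameter type in setCanvasBackgroundByUrl: it is a TypeScript template literal type, so only paths ending in .png or .jpg will type-check. That guarantee exists only at compile time, so if URLs ever come from user input you might pair it with a runtime type guard. The helper below is my own sketch, not part of the project:

```typescript
type ImageUrl = `${string}.${'png' | 'jpg'}`

// Runtime counterpart of the compile-time template literal type:
// narrows a plain string to ImageUrl when the extension matches.
function isImageUrl(url: string): url is ImageUrl {
  return url.endsWith('.png') || url.endsWith('.jpg')
}

console.log(isImageUrl('/assets/backgrounds/default-cup-configuration.png')) // true
console.log(isImageUrl('/assets/models/cup.glb')) // false
```

Inside an `if (isImageUrl(url))` block, TypeScript treats `url` as `ImageUrl`, so it can be passed to setCanvasBackgroundByUrl without a cast.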

Then the component /components/Configurator/Canvas2D/index.tsx, in which we will call these hooks to initialize Canvas2D and set the prepared image, will look like this:

../Canvas2D/index.tsx
tsx
'use client'

import { useInitFabricCanvas, useFabricCanvas } from '@/lib/hooks'
import { usePathname } from 'next/navigation'
import { useStore } from '@/lib/store'
import { useEffect } from 'react'
import { cn } from '@/lib/utils'
import Actions from './Actions'

export default function Canvas2D() {
  const textPanelPathname = useStore((state) => state.panels.text.path)
  const containerRef = useInitFabricCanvas()
  const currentPathname = usePathname()
  const {
    setCanvasBackgroundByUrl,
    unlockCanvasTextboxes,
    lockCanvasTextboxes,
    isMounted,
  } = useFabricCanvas()

  // Set the default canvas background after Canvas2D mounts
  useEffect(() => {
    setCanvasBackgroundByUrl(
      '/assets/backgrounds/default-cup-configuration.png'
    )
  }, [setCanvasBackgroundByUrl])

  /* Keep track of the current path and lock the text panel when
   * the user leaves it, to prevent unnecessary text changes while
   * the user is on other panels */
  useEffect(() => {
    if (currentPathname === `/${textPanelPathname}`) {
      unlockCanvasTextboxes()
    } else {
      lockCanvasTextboxes()
    }
  }, [
    unlockCanvasTextboxes,
    lockCanvasTextboxes,
    textPanelPathname,
    currentPathname,
  ])

  return (
    <>
      <div className="absolute left-4 top-4 z-50 flex items-center gap-2">
        <Actions />
      </div>
      <div
        className={cn(
          'h-full w-full overflow-hidden border-none',
          'opacity-0 transition-opacity duration-1000',
          {
            'opacity-100': isMounted,
          }
        )}
        // Don't forget to add the ref
        ref={containerRef}
      />
    </>
  )
}
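
The `cn` helper comes from the starter template's /lib/utils and its implementation is not shown in this article. In Tailwind projects such a helper is commonly a thin wrapper around the clsx and tailwind-merge packages; a dependency-free approximation of the subset used above might look like this (an assumption for illustration, not the starter's actual code):

```typescript
// Minimal stand-in for the starter's cn helper: joins string class names
// and object maps whose boolean values decide inclusion. The real helper
// likely also resolves conflicting Tailwind classes via tailwind-merge.
type ClassValue = string | Record<string, boolean>

function cn(...inputs: ClassValue[]): string {
  return inputs
    .flatMap((input) =>
      typeof input === 'string'
        ? [input]
        : Object.keys(input).filter((key) => input[key])
    )
    .join(' ')
}

console.log(cn('h-full w-full', { 'opacity-100': true, hidden: false }))
// -> "h-full w-full opacity-100"
```

This is why the component can toggle `opacity-100` with a plain boolean (`isMounted`) instead of concatenating strings by hand.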

This is what we should see in the browser at this step:

Default background image for Canvas2D component

It looks great! It’s time for a brief summary of what’s been done.

Summary

In this article, we started the process of creating a 3D AI cup configurator using the core technologies: Three.js, Fabric.js, React.js, Next.js and Zustand. We installed the project dependencies, ran the app in a browser and did the basic configuration. We also implemented a store to manage the application state, wrote some functions to initialize the fabric canvas in the Canvas2D component and rendered the default background image. In the next article, we will continue developing the application by following the steps described in the plan above and even extend its functionality to better understand 3D AI web customizer development.