This logs my ongoing experiments and insights into integrating AI within the product design and development cycle.
Last updated on Mar 16, 2026
Evolving the Workflow
For our latest project, we tried to move away from the traditional “design-first” waterfall process. Since our Product Manager already had a clear blueprint from conceptualisation to interface details, I proposed a new AI workflow to the team to increase our velocity:
- Vibe Coding: The PM and engineers use AI to build functional prototypes directly from requirements and references. By leveraging our existing design system, they build the core functionality first to ensure the product works before the final polish.
- Design: In parallel, designers focus on the master UI, navigation, flows and design patterns. This allows more time to address complex scenarios, secondary user flows and conduct competitive research on information architecture and experience direction.
- Iterate: Once the interface and flows are finalised, engineers iterate on the AI-generated code to align it with the high-fidelity designs for final testing.
🚧 The Current Bottlenecks
While this approach is efficient, current technology still faces a significant hurdle: the fragmentation between “design-to-code” and “code-to-design”.
Currently, there is no coding environment or platform where designers and engineers can collaborate bidirectionally in a stable, secure way.
Apart from “design-to-code”, engineering also leads the “code-to-design” process, so it is difficult to justify the extra effort required to convert code back into editable Figma files. This lack of a “single source of truth” forces the team to rely heavily on the testing (or production) environment to understand the full scope of product logic and edge cases.
🧠 Design Lead Perspective: Embracing the Shift
- We should not be dogmatic about “design-first” or “code-first”. Instead, we must choose the approach that best fits the project’s specific background and constraints.
- There is no need to fear that AI will take over, bypass or replace designers. Rather than resisting AI and new workflows, we should embrace them as an inevitable evolution. AI liberates designers from repetitive, manual tasks, allowing us to invest our time where it matters most: visual aesthetics, user experience and product strategy.
- Although the integration below focuses on vibe coding, I believe vibe design will matter even more for designers.
Setting Up OpenCode × Poe API × Figma MCP
Figma MCP allows AI coding tools like Claude, Cursor and Windsurf to interact directly with your Figma designs. However, for users in regions like Hong Kong, accessing these AI platforms often requires VPNs or phone verification, which carries a risk of account suspension.
To solve this, I use a Poe subscription to connect the Poe API to OpenCode, an open-source, terminal-first AI assistant, and then run the Figma MCP server locally.
Note: The following steps require some technical effort.
Step 1. Connect Poe API to OpenCode
- Ensure you have an active Poe subscription.
- Install the OpenCode desktop app, and follow the official guide to set up your environment.
Tip: If the opencode command isn’t recognised, install it via native Terminal using instructions from this article.
- In OpenCode Settings > Providers > Custom Provider, enter your Poe API credentials:
- Provider ID: (e.g.) Poe_Claude
- Display Name: (e.g.) Poe
- Base URL: https://api.poe.com/v1
- API Key: (Your API key)
- Model(s): (e.g.) claude-sonnet-4.6 (or the latest available)
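Under the hood, these GUI settings map to a custom provider entry in OpenCode’s opencode.json config file. A minimal sketch of what the resulting entry might look like is below; the exact field names and schema are assumptions based on my setup, so check the official OpenCode configuration docs for the current format before copying:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "poe": {
      "name": "Poe",
      "options": {
        "baseURL": "https://api.poe.com/v1",
        "apiKey": "{env:POE_API_KEY}"
      },
      "models": {
        "claude-sonnet-4.6": {
          "name": "Claude Sonnet 4.6"
        }
      }
    }
  }
}
```

Storing the API key as an environment-variable reference (rather than pasting it inline) keeps the config file safe to commit or share.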

Step 2: Add Figma MCP to OpenCode
Since the OpenCode Desktop App GUI does not yet support MCP configuration, you must use the Terminal:

- Open OpenCode terminal or your native terminal and run: opencode mcp add
- Fill in the details:
- MCP Server name: figma
- MCP Server type: Remote (select “Remote” even for local http setups)
- MCP Server URL: http://127.0.0.1:3845/mcp (this URL appears when you first enable the MCP server in Figma Dev Mode)
- In Figma Dev Mode, activate the Figma MCP server function.
- Restart your terminal. You should see “Figma MCP is ready” in the status menu.
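The opencode mcp add wizard above writes an MCP entry into opencode.json, so you can also configure it by hand. A rough sketch of what that entry looks like in my setup (field names are assumptions; verify against the OpenCode MCP docs):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "figma": {
      "type": "remote",
      "url": "http://127.0.0.1:3845/mcp",
      "enabled": true
    }
  }
}
```

Editing the file directly is handy if you later need to change the port or temporarily disable the server without re-running the wizard.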


Testing the Integration

I tried to transform a Figma design directly into production-ready code with a simple prompt:
Implement this design to production-ready code: {figma file link} using the design system in {figma file link}

The Results:
- Fidelity: Highly dependent on the model, but the generated page was remarkably close to the original design, complete with placeholder data.
- Context: Without detailed documentation and context, the AI occasionally misinterpreted data or requirements.
- Design System: The AI/MCP can reference variable collections in Figma, but you must explicitly name the specific “collection mode” in your prompt if you have multiple modes.
- Code: Achieving code that matches a specific front-end framework still requires iterative prompting.
Right now, I am also exploring Southleft’s Figma Console MCP. So far, I haven’t found a way to use it in OpenCode, but it does run in Cursor.
