Hi maintainers,
First off, thanks for the great work on the figmatocode repository! It's a very useful tool.
I've noticed that the default mode often generates layouts using position: absolute. While this can be accurate to the pixel, it often requires significant refactoring to create responsive and maintainable components, especially when dealing with complex designs. I've also experimented with the 'auto layout' mode, but I've found it doesn't always produce the desired results for many of the designs I work with. Manually adapting the absolute-positioned code involves handling numerous complex edge cases and significantly increases development complexity.
Furthermore, many standalone AI design-to-code tools, possibly because they rely primarily on visual analysis, often don't match the pixel-level style accuracy of figmatocode's direct Figma API integration, which captures attributes precisely.
After experimenting, I found a practice that works surprisingly well by combining figmatocode's precise attribute extraction (via Figma API) with the capabilities of generative AI. The workflow is essentially:
- Take a screenshot of the target design/component in Figma.
- Generate the code using the `figmatocode` extension to get the precise attribute values.
- Provide both the screenshot and the `figmatocode`-generated code to an LLM (I used Claude 3.7 Sonnet for testing).
- Use a prompt to instruct the LLM to:
  - Combine the visual information from the screenshot with the precise attributes from the original code.
  - Replace `position: absolute` layouts with a flexible alternative (like Flexbox).
  - Keep all other styles exactly as they were generated by `figmatocode`.
  - Infer approximate `padding`/`margin`/`gap`/`space` values by analyzing the `left`/`top` differences in the original absolute positioning.
- (Optional) Supplement the prompt with additional requirements or hints, such as considering responsive design for different devices, using `Tailwind CSS`, adapting for Next.js, or suggesting it ignore the content of very large SVG components to keep the output concise.
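To make the gap-inference step concrete, here is a minimal sketch of the heuristic I ask the LLM to apply: for a vertical stack, the `gap` between two siblings is the next child's `top` minus the previous child's `top + height`. The function name and data shape below are purely illustrative, not part of figmatocode's actual output or API.

```python
def infer_vertical_gaps(children):
    """Given children as (top, height) pairs from absolute positioning,
    return the vertical gap between each consecutive pair."""
    ordered = sorted(children, key=lambda c: c[0])  # sort by `top`
    return [
        next_top - (top + height)  # space between bottom of one box and top of the next
        for (top, height), (next_top, _) in zip(ordered, ordered[1:])
    ]

# Three stacked 40px-tall boxes at top: 0, 56, 112
gaps = infer_vertical_gaps([(0, 40), (56, 40), (112, 40)])
# gaps == [16, 16] → a uniform gap, so the LLM can emit
# `display: flex; flex-direction: column; gap: 16px`
```

When the computed gaps are roughly uniform, a single `gap` value works; otherwise the prompt can fall back to per-child margins.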
The results from this hybrid approach, which leverages figmatocode's precision, the screenshot's visual context, and the LLM's reasoning, have been remarkably effective. Compared to many SaaS tools I've tried, this method often yields better outcomes.
Would you consider integrating an AI-powered conversion capability directly into the figmatocode repository? This could perhaps be an optional "AI Convert" mode or a post-processing step after the initial code generation.
Thank you for considering this suggestion and for your work on this valuable tool.