Apple researchers developed a method to train an open-source large language model, StarChat-Beta, to generate SwiftUI user interface code by creating a large synthetic dataset and iteratively refining it through automated feedback.
The research, detailed in the paper “UICoder: Finetuning Large Language Models to Generate User Interface Code through Automated Feedback,” addresses a persistent weakness of large language models (LLMs): although they perform well at creative writing and general-purpose coding, they struggle to produce syntactically correct, well-designed user interface (UI) code. The limitation stems from the scarcity of UI code in training data; even in curated or manually authored fine-tuning datasets, UI code can account for less than one percent of examples.
To work around this data sparsity, the researchers started from StarChat-Beta, an open-source LLM specialized for coding tasks. They fed StarChat-Beta a collection of UI descriptions and instructed it to generate a SwiftUI program for each, producing a large synthetic dataset that served as a broad initial pool of UI code examples.
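To make the setup concrete, a description along the lines of “a settings screen with a dark-mode toggle and a save button” would be expected to yield a small SwiftUI view roughly like the sketch below. The example is illustrative only; the paper does not publish its prompts or generated programs.

```swift
import SwiftUI

// Hypothetical example of the kind of program the model is asked to produce
// from a one-sentence UI description; not taken from the paper's dataset.
struct SettingsView: View {
    @State private var darkModeEnabled = false

    var body: some View {
        Form {
            Toggle("Dark Mode", isOn: $darkModeEnabled)
            Button("Save") {
                // Persist the preference; left empty in this sketch.
            }
        }
        .navigationTitle("Settings")
    }
}
```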
Each generated program then went through a two-stage validation. First, it was run through a Swift compiler to verify that it actually compiled. Second, GPT-4V, a vision-language model, compared the resulting interface against the original UI description to judge how faithfully the code matched it.
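The first validation stage can be approximated by shelling out to the Swift compiler and checking its exit status. The sketch below uses `swiftc -typecheck` as a stand-in; the paper does not specify the exact toolchain invocation, and a real pipeline would also need SDK and target flags so that `import SwiftUI` resolves.

```swift
import Foundation

/// Returns true if the generated source type-checks with the Swift compiler.
/// Simplified sketch: diagnostics are suppressed and toolchain setup is assumed.
func compilesSwiftSource(at sourcePath: String) -> Bool {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = ["swiftc", "-typecheck", sourcePath]
    process.standardError = FileHandle.nullDevice  // discard compiler diagnostics

    do {
        try process.run()
        process.waitUntilExit()
        return process.terminationStatus == 0
    } catch {
        return false
    }
}
```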
Outputs that failed to compile, were judged irrelevant to their description, or duplicated other outputs were discarded. The programs that survived these checks formed a high-quality training set, which was then used to fine-tune StarChat-Beta.
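Taken together, the filtering pass amounts to something like the following sketch. Here `compiles` can be backed by the compiler check above (writing each candidate’s source to a temporary file first), `rateRelevance` stands in for GPT-4V’s judgment, and the 0.5 cutoff is an assumed threshold rather than a figure from the paper.

```swift
// Hypothetical representation of one generated sample awaiting filtering.
struct Candidate {
    let description: String   // the UI description given to the model
    let source: String        // the SwiftUI program it produced
}

/// Keeps only candidates that compile, are judged relevant to their
/// description, and are not duplicates of earlier keepers.
func filterCandidates(_ candidates: [Candidate],
                      compiles: (String) -> Bool,
                      rateRelevance: (Candidate) -> Double) -> [Candidate] {
    var seen = Set<String>()
    var kept: [Candidate] = []

    for candidate in candidates {
        // Deduplicate on the whitespace-normalized source text.
        let normalized = candidate.source
            .split(whereSeparator: \.isWhitespace)
            .joined(separator: " ")
        guard !seen.contains(normalized) else { continue }

        // Drop programs that fail the compiler or the relevance check.
        guard compiles(candidate.source),
              rateRelevance(candidate) >= 0.5 else { continue }

        seen.insert(normalized)
        kept.append(candidate)
    }
    return kept
}
```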
The researchers repeated this generate-and-validate cycle several times. Each iteration improved the model’s ability to generate SwiftUI code, which in turn yielded cleaner, more accurate datasets for the next round of fine-tuning. This feedback loop was central to the model’s progressive improvement.
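In outline, the feedback loop reads roughly as below. The closures are placeholders for the generation, filtering, and fine-tuning stages described above (reusing the `Candidate` type from the earlier sketch), and the default of five rounds mirrors the iteration count reported in the paper.

```swift
/// High-level sketch of the iterative refinement loop. `Model` is a
/// placeholder for whatever handle the training framework exposes.
func refine<Model>(model initialModel: Model,
                   descriptions: [String],
                   rounds: Int = 5,
                   generate: (Model, [String]) -> [Candidate],
                   filter: ([Candidate]) -> [Candidate],
                   finetune: (Model, [Candidate]) -> Model) -> Model {
    var model = initialModel
    for _ in 1...rounds {
        let candidates = generate(model, descriptions)  // synthesize SwiftUI programs
        let trainingSet = filter(candidates)            // drop non-compiling, irrelevant, duplicate outputs
        model = finetune(model, trainingSet)            // improved model seeds the next round
    }
    return model
}
```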
After five full rounds of this process, the researchers had accumulated approximately 996,000 distinct SwiftUI programs, which they used to train the final model, named UICoder. In testing, UICoder’s output compiled consistently and matched the original prompts far more closely than the initial StarChat-Beta model’s did; both automated metrics and human evaluations showed UICoder substantially outperforming the base model at generating SwiftUI code.
UICoder also approached GPT-4 in overall code quality and notably exceeded it in compilation success rate. A striking finding from the study was that SwiftUI code had been accidentally excluded from StarChat-Beta’s original training data. StarChat-Beta was trained primarily on three corpora: TheStack, a large dataset of permissively licensed code repositories comprising 250 billion tokens; crawled web pages; and OpenAssistant-Guanaco, a smaller instruction-tuning dataset.
The researchers determined that Swift code repositories had been inadvertently excluded when TheStack was assembled, and manual inspection of OpenAssistant-Guanaco found only a single Swift example among the ten thousand entries in its response field. They hypothesized that whatever Swift code StarChat-Beta did see during initial training most likely came from crawled web pages, which tend to be lower quality and less structured than repository code.
This inadvertent exclusion implies that UICoder’s performance gains were not attributable to the re-processing of pre-existing SwiftUI examples from its base training, as there were virtually none. Instead, the improvements stemmed entirely from the self-generated, rigorously curated datasets developed through Apple’s automated feedback loop.
This outcome led the researchers to hypothesize that their method, demonstrated here for building UIs with SwiftUI, could generalize to other programming languages and UI toolkits. The full paper is available on arXiv.