
Using GPT-4 to Build a Prototype App: A 5-Step Guide and Key Takeaways

Conversations about ChatGPT and GPT-4 are dominating the technology universe. At This One, we’re especially excited about these large language models (LLMs) for two main reasons:

  1. They could help power our general-purpose recommendation engine for life. In doing so, they could play a central role in our company mission: To help people discover things they truly love.
  2. They could help us move faster in our product discovery process, by allowing us to build prototypes and test possible ideas very quickly.

Last week, I used GPT-4 to prototype an idea and build it directly as a stand-alone iPhone app. I had never written a single line of Swift code before, but with GPT-4’s help, I built a working app from scratch and tested it with real-life test users, all within about 8 hours. There were certainly some bumps along the way, but ultimately I was able to test (and validate!) a complex idea insanely quickly.

In this post, I’ll provide a 5-step guide that will help you do the same. I’ll also share 5 key takeaways that I learned along the way.

The Power of Rapid Prototyping

First, a question: Why spend time on rapid prototyping at all? When undertaking product discovery, it’s tempting to believe that testing complex ideas will require building complex products. But actually, that’s rarely true: In many cases, it’s possible to get a fairly strong signal about an idea without building very much at all. My favourite reference on this point is Testing Business Ideas - a must-read for anyone who spends their time exploring the idea-maze.

At This One, we’ve found that rapid prototyping can help us by:

  • Reducing Time-to-Market: Rapid prototyping allows us to quickly transform ideas into tangible products and experiences. This accelerates the design, testing, and refining stages of the product-discovery process.

  • Reducing Costs: By identifying and addressing flaws early in the process, rapid prototyping can help us avoid costly modifications later on. This also minimizes waste and reduces the overall cost of development.

  • Enhancing our Creativity and Innovation: Rapid prototyping allows us to explore ideas and concepts more freely. This helps us build more innovative and better-performing products.

  • Improving our Communication: Physical prototypes are an effective tool for conveying ideas, enabling better communication between team members, stakeholders, and potential customers or investors.

  • Improving User Experience: Rapid prototyping allows us to test and refine a product, ensuring it meets the needs and expectations of our users. This helps create a more satisfying user experience and increases the likelihood our products will succeed.

  • Mitigating Risks: By creating and testing multiple prototypes, we can identify potential risks and address them early in the development process. This reduces the likelihood of us encountering unexpected issues after the product's launch.

There are many great tools for rapid prototyping, ranging from wireframing tools such as Whimsical through to more high-fidelity, interactive tools such as Figma. These tools have undoubtedly changed the way that startups undertake product-discovery, by allowing them to complete build-measure-learn cycles faster than ever before.

Now there’s a new tool to add to this list: GPT-4.

Enter GPT-4

I’ve already read several other blog posts and tweets about how people have used LLMs to build interactive prototypes in a matter of hours. Perhaps the most talked-about is a movie-recommendation app called “5 Movies” by Morten Just. I first heard about Morten’s post from Zefi Hennessy Holland, one of our partners from Sequoia. Given that recommending awesome movies is squarely within our current focus, Zefi encouraged me to give it a shot.

When I first chatted with Zefi, I thought about making an app that does roughly the same as Morten Just’s (specifically, something with a graphical interface that lets people select movies they love and then returns recommendations for similar movies, along with data about streaming availability). However, the more I thought about it, the more I questioned whether that would really help This One learn anything new. Our iPhone app already does that - and much more (waitlist here!). If I built an app like this, I’d be taking GPT-4 for a test-drive, but I wouldn’t really be testing a meaty hypothesis.

So I decided to up the stakes.

One of the most heavily requested features from our beta-testing community is the ability to use free-text input to describe what they’re looking for. Building our own prototype product for this use-case would require a large amount of research and engineering work. But by leveraging GPT-4's extensive knowledge base and natural language understanding, it seemed reasonable that I might be able to build a functional app quite quickly. I decided that this was the perfect prototype for my experiment.

Here’s a step-by-step guide to how I did it. Please feel free to use these prompts and modify them for your own projects!

How To Use GPT-4 to Build an iPhone App

Step 1: Getting Started

Getting the most out of GPT-4 really comes down to asking it the right questions in the right way. This was my first time using an LLM for a project like this, so Zefi helped me get started by suggesting the following prompt:

You are an excellent iOS programmer. You are going to help me design a Swift iOS app. The goal of the app is to help a user find a film they might like to watch. The app’s core functionality is the following: The user writes a text-string to describe what they want. Then the app should use a combination of AI and a film database to select 3 films that the user is likely to love. Is there anything else I could help explain to enable you to build this?

GPT-4 will likely provide you with a long answer describing how this project would be undertaken at a software company: gathering functional and non-functional requirements, sketching UI and database designs, creating a movie-recommendation algorithm, testing everything with users, fixing bugs, and deploying to the App Store. If you want to dive deeper, you could ask:

Can you create a design document please based on my inputs?


Step 2: Setting the Right Scope

When I saw GPT-4’s answer, I was pretty stunned. I’ve worked as a product leader for almost a decade, and GPT-4’s description of this product development process was among the most succinct and clear that I’ve ever seen. That being said, if you’re like me, you’ll want something that you can get running much more quickly.

I need to have you create this app in the next 1 hour. How can we do that?

It goes without saying that your mileage may vary. But for me, it worked: GPT-4 massively descoped the project. In its answer, it also descoped the AI part (it suggested omitting real recommendations and showing random movies instead), but I again followed Zefi’s advice and simply told it to add that back in.

I need to keep the AI piece. How can we leverage existing off-the-shelf AI tools like chat-gpt to help bake this functionality into the POC?

This was a key learning for me: If it does something other than what you want, don’t be afraid to tell it.
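To make the descoping concrete, here’s a minimal sketch of how the two versions could sit side by side: a stub that returns random picks (the descoped version GPT-4 first suggested) and an AI-backed version behind the same interface. The protocol and type names are placeholders I’ve chosen for illustration, not code from my actual project.

```swift
import Foundation

// Illustrative placeholder names, not the code GPT-4 wrote for my project.
protocol RecommendationService {
    func recommendations(for query: String) async throws -> [String]
}

// The descoped version GPT-4 first proposed: ignore the query and return random picks.
struct RandomMovieService: RecommendationService {
    private let catalogue = ["The Godfather", "Spirited Away", "Arrival", "Parasite", "Amélie"]

    func recommendations(for query: String) async throws -> [String] {
        Array(catalogue.shuffled().prefix(3))
    }
}

// The version I actually wanted: hand the free-text query to GPT-4 and parse its reply.
// (See the networking sketch in the Step 5+ section for one way to make that call.)
struct AIMovieService: RecommendationService {
    func recommendations(for query: String) async throws -> [String] {
        // Call the model here and turn its reply into three titles.
        return []
    }
}
```

One nice side-effect of a structure like this is that the UI can be demoed immediately with the random stub while the real AI call is still being wired up.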


Step 3: The WOW Moment

I was eager to get down to some code. I asked:

Can you create a basic SwiftUI UI layout for the POC, please?

It then proceeded to completely blow my mind. We went from “blank screen” to “iPhone app” in less than the time it would take to make a cup of tea.
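For context, the layout it generated looked roughly like the sketch below: a text field for the free-text description, a button to fetch recommendations, and a list for the three results. This is a reconstruction of the general shape rather than GPT-4’s verbatim output, and the button action is stubbed with placeholder titles instead of the real recommendation call.

```swift
import SwiftUI

// A reconstruction of the kind of layout GPT-4 produced, not its verbatim output.
struct ContentView: View {
    @State private var query = ""
    @State private var recommendations: [String] = []

    var body: some View {
        NavigationView {
            VStack(spacing: 16) {
                TextField("Describe what you want to watch…", text: $query)
                    .textFieldStyle(.roundedBorder)
                    .padding(.horizontal)

                Button("Find me 3 films") {
                    // In the real prototype this triggers the recommendation call;
                    // here it is stubbed with placeholder titles.
                    recommendations = ["Film one", "Film two", "Film three"]
                }
                .buttonStyle(.borderedProminent)

                List(recommendations, id: \.self) { title in
                    Text(title)
                }
            }
            .navigationTitle("Film Finder")
        }
    }
}
```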

I still remember the first time I ever saw Google. I still remember the first time I ever saw an iPhone. I will always remember those few minutes in exactly the same way. This changes everything.


Step 4: Configuring the IDE

I’ve never built an iPhone app before, so I needed some help setting up my developer environment.

Thanks - this looks great. I’m keen to start. You should presume I know nothing about software development and design, and that I have never shipped a product. I’ll need you to walk me through each stage in plain English. Are you able to do that?

GPT-4 will provide step-by-step instructions for configuring Xcode and setting up an Apple Developer account. (In this part of the journey, I needed to Google a few points for clarification, but everything went quickly and smoothly.)


Step 5+: The Refinement Phase

From here, things get a little bumpy. I might just have been unlucky, but in my experience at least, GPT-4 makes a lot of small mistakes when writing code. Although it only took me about an hour to get this far, it then took about another 5 hours to finish my prototype app. I had to manage package dependencies, debug API responses, and fix many issues before the app would compile. In most cases, I did this by asking GPT-4 to help me, but in some cases I had to rely on my own knowledge, or on Google.
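To give a flavour of the plumbing involved, here’s a simplified sketch of the request-and-response handling I spent much of that time debugging: sending the user’s text to the OpenAI chat completions endpoint and decoding the reply. The struct names, the prompt wording, and the fetchRecommendations helper are illustrative choices of mine, not the exact code GPT-4 produced.

```swift
import Foundation

// Simplified sketch; names and prompt wording are placeholders, not my prototype's exact code.
struct ChatMessage: Codable {
    let role: String
    let content: String
}

struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

struct ChatResponse: Codable {
    struct Choice: Codable {
        struct Message: Codable { let content: String }
        let message: Message
    }
    let choices: [Choice]
}

/// Sends the user's free-text description to the chat completions endpoint
/// and returns the model's reply as plain text.
func fetchRecommendations(for query: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")

    let body = ChatRequest(
        model: "gpt-4",
        messages: [ChatMessage(
            role: "user",
            content: "Recommend 3 films for someone who wants: \(query). Reply with one title per line."
        )]
    )
    request.httpBody = try JSONEncoder().encode(body)

    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(ChatResponse.self, from: data).choices.first?.message.content ?? ""
}
```

In my case, the mistakes were mostly in details like these: how the request was constructed, which parameters were passed, and responses that didn’t decode the way I expected.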

I got there eventually, but it wasn’t a complete breeze. Morten Just’s post notes that he got through this in a matter of minutes, but my experience was definitely a matter of hours. I think it’s important to set realistic expectations, especially for anyone who hasn’t written an app before.

To be clear: Even at 5 hours, this is absolutely mind-boggling. But I think it’s important to bring some realism to the conversations that are happening right now. Using GPT-4 was an incredible turbo-boost - perhaps the most impressive one I’ve ever experienced - but it wasn’t simply out-of-the-box magic.

Key Takeaways

I learned a lot in those 5 hours. Here are my 5 key takeaways:

  • The process involves a lot of luck. If you ask GPT-4 the exact same programming question twice in a row, you’ll probably get two wildly different answers. This can include using completely different packages or completely different approaches to the same problem. Given this high degree of randomness, I wonder whether the “standard” way to use LLMs will be to ask the same question multiple times, to read through all the different answers, then to choose which one to explore. I wish I’d taken this approach from the start, rather than assuming it would always be right first time.
  • It gets some very weird things wrong. For me, the biggest challenge was getting my app to interface with an external API, so that it could send the user-input strings and receive the corresponding movie recommendations. GPT-4 made lots of mistakes in the way it tried to call the API and in the parameters it tried to pass. I expect it will get better at this very quickly, but for now, it takes lots of trial and error (and lots of patience). It feels like guessing. It doesn’t feel like linear progress.
  • In all honesty, I think you need some traditional programming experience to make this work. You don’t need to be a professional engineer, but you probably do need to know your way around an IDE, and it really helps to have some instinct for why things might not be working. In my own project, I had to set breakpoints, step through functions I hadn’t written, write print statements, inspect variables … the usual stuff that an engineer would do. GPT-4 tried to help, but it could only solve the problems about 75% of the time. The rest were down to me.
  • “No” doesn’t mean “No”. There were quite a few situations where GPT-4 told me something wasn’t possible (such as creating a prototype quickly, or using AI to generate recommendations - see above). In those cases, my initial instinct was to accept that answer. But I quickly learned to push back and say “No, I actually need this”. In virtually all cases, GPT-4 then did what I asked.
  • But ultimately … it works. I have a fully working prototype that I’ve shipped to TestFlight and tested with real people. The prototype tests a heavily requested feature in a fully interactive way, and addresses a job-to-be-done that we know is prominent among our beta community. I did all this in less than a day of work. (A simplified sketch of how the pieces fit together follows this list.)
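As promised above, here’s a simplified sketch of how the pieces fit together: a small view model that takes the free-text query, calls the fetchRecommendations helper from the Step 5+ sketch, and publishes the parsed titles for the SwiftUI view to display. The names and the one-title-per-line parsing are assumptions for illustration, not my prototype’s actual code.

```swift
import Foundation
import SwiftUI

// Illustrative glue code; names and parsing are assumptions, not my prototype's actual code.
@MainActor
final class RecommendationViewModel: ObservableObject {
    @Published var recommendations: [String] = []
    @Published var errorMessage: String?

    /// Sends the user's free-text query to the model and publishes the resulting titles.
    func load(query: String, apiKey: String) async {
        do {
            // Uses the fetchRecommendations helper sketched in the Step 5+ section.
            let reply = try await fetchRecommendations(for: query, apiKey: apiKey)
            // Assume one title per line in the reply; real parsing needs to be more defensive.
            recommendations = reply
                .split(separator: "\n")
                .map { $0.trimmingCharacters(in: .whitespaces) }
                .filter { !$0.isEmpty }
        } catch {
            errorMessage = "Something went wrong: \(error.localizedDescription)"
        }
    }
}
```

In the layout from Step 3, the stubbed button action would then become something like `Task { await viewModel.load(query: query, apiKey: apiKey) }`, with the List driven by `viewModel.recommendations`.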

Conclusion

I’m still completely in awe of how GPT-4 helped me build a working prototype so quickly. It’s not a silver bullet, but it’s still mind-blowingly impressive. I had never written a line of Swift code in my life, and yet in just a few hours I developed a functional prototype iPhone app and shipped it to TestFlight. Despite some challenges along the way, the experience demonstrates the potential of GPT-4 to transform the product-development and prototyping process, even for companies building complex products.

If you’re intrigued by all the GPT-hype, and if you have at least a basic understanding of how to write code, I’d highly recommend that you jump in and give it a try. And if you’d like to see how experiments like these are impacting our core product experience at This One, you can join our Waitlist here.

Thanks to Zefi Hennessy Holland for your support throughout this project.