
3 Steps to Context Engineering a Crystal-Clear Project


Wouldn’t it be amazing to easily understand the source code of any software project and gain a clear view of even the most complex systems?

The steady adoption of AI throughout enterprises has made work much easier, but also more complex. Between AI-generated code and quicker turnaround times for deliverables, companies around the world are pushing creative output to a new level.

In this article, you’ll learn three easy steps for gaining a clear picture of any project using the skill of context engineering.

Building upon your personal knowledge

Context engineering is a technique for prompting an LLM with specific knowledge to complete a task.

This method of providing context is the same technique used in retrieval augmented generation (RAG), where contextual data or conversation history is provided along with each request to an LLM. This additional knowledge is used to intelligently answer the question at hand.

Context can consist of internal or private data that the AI would not normally have been trained on, which is what makes this style of prompt engineering so powerful.
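To make this concrete, here is a minimal sketch of supplying context alongside a question, written in TypeScript. It uses the OpenAI chat-completions REST endpoint purely as an illustration; the context string, model name, and helper function are placeholders rather than a prescribed implementation.

// Minimal sketch: send a question to an LLM along with relevant context,
// the same way a RAG pipeline would. The context below stands in for internal
// knowledge the model was never trained on.
const context =
  "Orders are saved by OrderService.save(), which runs validation first."; // hypothetical internal note

async function askWithContext(question: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        // The context travels with the request, right next to the question.
        { role: "system", content: `Answer using this context:\n${context}` },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}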

A real-world example for software developers

Context engineering is highly effective for understanding an app’s source code and interconnected systems.

While AI assistants such as ChatGPT and Copilot offer varying forms of integration within an integrated development environment (IDE), it can become complicated or even impossible to span questions across multiple codebases or architectures.

This is a perfect use-case for context engineering. Here’s how to use it!

Step 1. Build the context

Our goal is to understand an application’s source code that spans multiple repositories.

This would normally be a complex task: searching through the code in various locations, pulling in diagrams from different sources, and trying to reconcile how the pieces fit together. Rather than manually searching through each individual project, we can build a context and let the AI perform this work for us intelligently.

This process begins by formulating the context.

Chatting with the source code

Context can be built by having a simple conversation with the AI about one of the projects.

The Copilot built into the IDE provides a convenient way to build this context. A developer working on an unfamiliar project can simply chat with the source code.

As an example, consider a web development project that has one repository for a client-side UI and a second for the server-side API and database. Both projects are hosted in separate repositories on GitHub.
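For illustration, the client-side half of such an app might contain a submit handler along these lines. The form shape, endpoint, and function names here are hypothetical; they simply mirror the kind of execution path we want the AI to trace.

// Hypothetical client-side submit handler for the example project.
interface ContactForm {
  name: string;
  email: string;
}

// Client-side validation, run before any request is sent.
function validate(form: ContactForm): string[] {
  const errors: string[] = [];
  if (!form.name.trim()) errors.push("Name is required.");
  if (!form.email.includes("@")) errors.push("Email looks invalid.");
  return errors;
}

// Fired when the submit button is clicked.
async function onSubmitClick(form: ContactForm): Promise<void> {
  const errors = validate(form);
  if (errors.length > 0) {
    console.warn("Validation failed:", errors); // stand-in for showing errors in the UI
    return;
  }
  // HTTP POST to the server-side project, which lives in a separate repository.
  const response = await fetch("/api/contact", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(form),
  });
  console.log(response.ok ? "Saved." : "Server rejected the request.");
}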

We can build an execution flow across both projects by starting with an outline.

A web application consisting of two projects that span multiple repositories. Source: Author.

Generating an outline

The first project (the client) can be loaded into the IDE, where we can ask the AI copilot to generate an outline of an execution path.

Suppose we are trying to understand how clicking a button in the application results in saving a record into the database. We might simply ask the Copilot how the button works. This conversation would include asking for an outline of the main functions that are executed after the button is clicked until the request is sent to the server, including function names and parameters.

> Make an outline of the execution path after the submit button is clicked, including the HTTP POST request to the server side code, the endpoint method that receives the payload, and any validation that is performed on the client.

Once we have an outline from the first project as context, it’s time to move on to the second.

Step 2. Use the context

The output from the conversation with the first project can now be used to better understand the second.

Since AI copilots can typically only work with the currently loaded project, we’ll need to load the second project into the same IDE and start a new conversation. We can ask the Copilot the same kind of question: to generate an execution path for the button-click behavior. This time, however, we include the response from the first project, effectively providing context to the LLM.

Notice how we’re carrying over the conversation from the first project into the second, allowing the LLM to draw on a detailed understanding of both projects and combine them into a unified answer.

> Make an outline of the execution path after the form is submitted, including the endpoint method that receives the payload, and any validation that is performed on the server before a response is returned. Use the following client-side execution path as context: [context]
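If you prefer to script this hand-off rather than copy-pasting it, a minimal sketch might look like the following, assuming the outline from the first conversation was saved to a local file (the file name and prompt wording are placeholders).

import { readFileSync } from "node:fs";

// Outline produced by the conversation with the client-side project (Step 1),
// saved to a local file; the file name is a placeholder.
const clientOutline = readFileSync("client-execution-path.md", "utf8");

// Prompt for the second (server-side) project, with the first project's
// outline spliced in as context.
const serverPrompt = `
Make an outline of the execution path after the form is submitted,
including the endpoint method that receives the payload and any
validation performed on the server before a response is returned.

Use the following client-side execution path as context:
${clientOutline}
`;

console.log(serverPrompt); // paste into the Copilot chat for the second project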

Extending context across multiple projects is powerful on its own, but we can take it one step further and create a graphical flowchart.

Step 3. Visualize the result

An outline of the execution path spanning two projects provides a textual view of the program’s behavior, but we can do even better.

We can reuse the joint context from our prior conversations with the AI to generate a comprehensive visualization. Models such as ChatGPT, Claude Sonnet, and Gemini are well suited for this purpose.

> The following describes the complete execution for submitting a business form. Generate a flowchart using Mermaid Markdown, compatible with a GitHub pull request, and include a text description of all events in the flowchart.

A flowchart is generated using Mermaid. The result is compatible with GitHub pull requests and can be directly displayed within the PR description.

flowchart TD
A[User fills out Business Profile Form] --> B[Client-side Validation]
B -->|Valid| C[HTTP POST /api/contact]
B -->|Invalid| Z[Show Validation Errors]
C --> D[ASP.NET Endpoint ContactController]
D --> E["Server-side Validation: .NET Data Annotations & Custom Attributes"]
E -->|Valid| F[Process Data, Save to DB, Send Email]
E -->|Invalid| Y[Return Validation Errors]
F --> G[Return Success Response]
Y --> H[Client Receives Error Response]
G --> I[Client Receives Success Response]
H --> J[Show Server Validation Errors]
I --> K[Show Success Message]

The resulting flowchart is rendered in GitHub, providing a clear picture of the complete execution of the software.

A flowchart spanning multiple projects through context engineering. Source: Author.

Taking a pull request to the next level

Flowcharts are not just effective for understanding the codebase as a developer; they’re also a great way to document and even present your work to peers.

Using context engineering across multiple prompts lets you carry knowledge over between projects and arrive at a single cohesive result.

Displaying this result as a flowchart directly in a pull request provides a professional level of documentation that can be quickly and easily understood by others.
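To display it, paste the generated diagram into the pull request description inside a fenced code block labelled mermaid, which GitHub renders inline. For example (shortened here for brevity):

```mermaid
flowchart TD
    A[User fills out Business Profile Form] --> B[Client-side Validation]
    B -->|Valid| C[HTTP POST /api/contact]
    B -->|Invalid| Z[Show Validation Errors]
```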

A stepping stone towards higher AI

As we’ve seen, context engineering can be leveraged to generate powerful flowcharts for understanding the code across multiple repositories.

However, this manual process may be merely an intermediate step until more powerful AI becomes available. After all, AI in software development has been advancing steadily. Nonetheless, as we’ve seen in prior years with prompt engineering, it’s important to leverage AI copilots to augment your skills as a developer.

By creating easily understandable code changes with AI-powered flowcharts, you can enhance your programming output and demonstrate your skill with AI.

How have you used AI to boost your work? Let me know!

About the author

If you’ve enjoyed this article, please consider following me on Medium, Bluesky, LinkedIn, and my website to be notified of my future posts and research work.
