Understanding AI and Large Language Models (LLMs): A Guide for Security Engineers
In application security, Large Language Models (LLMs) have emerged as a powerful tool to help engineers identify vulnerabilities, distinguish false positives, and even suggest or generate secure fixes. On customer calls, I often get questions about how LLMs work and how Corgea leverages them, so I wrote this overview to help security teams become more familiar with the technology.
Understanding how LLMs operate and the principles behind their functionality—such as tokens, inference, and semantic context—is essential for maximizing their potential. Additionally, prompt engineering and fine-tuning can further customize LLMs to address specific security challenges, improving their accuracy and reliability.
This guide explains the mechanics behind LLMs and how they work in practical security contexts, illustrated by detailed code examples.
What is AI? A Brief History from Alan Turing to LLMs
The field of Artificial Intelligence (AI) dates back to British mathematician Alan Turing, who, in the 1950s, laid the foundations for modern computing and machine intelligence. Turing asked the fundamental question, "Can machines think?", in his seminal 1950 paper "Computing Machinery and Intelligence." He proposed the Turing Test to determine if a machine could exhibit intelligent behavior indistinguishable from a human's. Initially, AI evolved through rule-based systems, where explicit "if-then" statements guided machine behavior.
With more techniques becoming available in the 1980s, Machine Learning (ML) emerged, allowing computers to learn from examples rather than following coded instructions. However, it wasn't until the early 2000s, when data became far more plentiful, that machine learning really took off. This shift introduced decision trees, neural networks, and support vector machines—tools that could detect patterns in data and adapt over time.
Great technological shifts are often a confluence of multiple technologies coming together. The first ingredient was Deep Learning and Natural Language Processing (NLP), which took off in the 2010s by leveraging large datasets and complex neural networks. The introduction of the Transformer architecture in 2017 marked a significant breakthrough. Unlike earlier models, Transformers use self-attention mechanisms, enabling them to process large amounts of text and capture long-range dependencies in language. This capability led to the development of LLMs: highly sophisticated models that can understand and generate coherent language across both human and programming languages, revolutionizing NLP and application security.
This breakthrough was enabled by two other factors: the corpus of data provided by the public internet, and GPUs becoming more powerful and cost-effective. A corpus of data refers to the massive collection of content (articles, books, code, and more) used to train LLMs. The quality, diversity, and size of this corpus are critical, as they enable the LLM to learn language patterns, vocabulary, and syntax across varied topics and languages. Additionally, LLMs needed compute power in the form of GPUs to piece it all together. Why GPUs? They can perform the massive, highly parallel matrix arithmetic that neural networks require far faster than CPUs. LLMs are essentially prediction machines, with mathematics at the heart of the technology.
Key Concepts in LLMs
Tokens: LLMs break down input text into small chunks called tokens, each representing a part of a word, a whole word, or even punctuation. Tokens enable the model to analyze and predict text in manageable pieces, making it more responsive to subtle patterns. In a security context, tokens allow LLMs to identify specific phrases like “password” in test files, differentiate them from production code, and flag or dismiss them accordingly. You can think of tokens as puzzle pieces that the LLM assembles based on learned patterns, forming a coherent output from contextual clues (see the tokenization sketch at the end of this section).
Inference: This is the model’s process of generating responses based on input tokens. When given a prompt, the LLM uses inference to predict the next token in a sequence by examining patterns it learned during training. This is similar to predicting moves in chess: the model analyzes what came before to anticipate what might come next. In security scenarios, inference helps LLMs analyze code snippets and predict potential vulnerabilities based on the detected context.
Semantic Context: This is the model’s understanding of relationships and meanings within language. LLMs consider not just individual tokens but the broader semantic context, recognizing the functional purpose of different parts of a codebase. This ability helps LLMs interpret if a password in a test file is harmless or if file upload code lacks restrictions, making it risky. In this sense, semantic context allows the LLM to read between the lines—understanding not just what words mean individually but how they function within the larger scope.
Transformer Architecture: This neural network architecture underpins LLMs. The key innovation in Transformers is the self-attention mechanism, which allows the model to weigh each token’s importance in relation to other tokens in the input. This parallel processing capability enables Transformers to understand both short- and long-term dependencies within text, making them more powerful and efficient than previous models for handling large datasets and complex language tasks.
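To make self-attention tangible, here is a minimal sketch of a single attention head in NumPy with random toy weights; real models use many heads, learned weights, and much larger dimensions.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

np.random.seed(0)
tokens = ["user", ".", "password"]           # three toy tokens
d = 4                                        # tiny embedding dimension
X = np.random.randn(len(tokens), d)          # stand-in token embeddings
W_q, W_k, W_v = (np.random.randn(d, d) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d)                # how strongly each token attends to every other token
weights = softmax(scores, axis=-1)           # each row sums to 1
context = weights @ V                        # context-aware representation of every token

print(np.round(weights, 2))                  # the attention weights between the three tokens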
LLMs are language-agnostic, meaning they can interpret and respond to multiple languages, including coding languages, with the same adaptability. This versatility enables LLMs to analyze JavaScript and Python code without additional tuning, making them ideal for polyglot environments where security concerns span different programming languages. Additionally, an LLM can recognize a “password” regardless of the natural or programming language it appears in.
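As a quick illustration of tokens and this language-agnostic behavior, the sketch below (referenced earlier in the tokens description) tokenizes the same hard-coded password in a Python and a JavaScript snippet. It assumes the tiktoken library is installed; the exact token boundaries depend on which tokenizer a given model uses.

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several recent OpenAI models

snippets = {
    "python": 'password = "SuperSecret123"',
    "javascript": 'const password = "SuperSecret123";',
}

for language, snippet in snippets.items():
    token_ids = enc.encode(snippet)
    pieces = [enc.decode([t]) for t in token_ids]
    # The identifier "password" surfaces as a recognizable piece in both snippets,
    # which is part of what lets the model treat them the same way.
    print(language, pieces)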
Example 1: Recognizing a False Positive Vulnerability
In the following code snippet from a test file, a hard-coded password is used. Traditional static analysis tools might flag this as a security risk. However, an LLM with context awareness can discern that this password is used within a test environment, as indicated by the /test/api/deluxeApiSpec.ts file path and the use of the frisby API testing framework.
LLM Analysis
The LLM begins by tokenizing the input. It identifies key tokens such as password and email, understanding that these often relate to sensitive data. Next, the model’s inference process examines these tokens within the semantic context of a test file (noted by describe('/rest/deluxe-membership') in the frisby test case). The model recognizes that test files often use hard-coded data for automation purposes and infers that the hard-coded password is likely non-critical.
Prompt engineering plays a crucial role here. Security-specific prompts help the LLM identify context, instructing it to prioritize certain features (like whether code resides in a test file) while disregarding others, reducing false positives. Fine-tuning the LLM on test data and production code examples further trains it to recognize these distinctions, making it even more accurate at flagging relevant risks.
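To make this concrete, here is a minimal sketch of how such a context-aware prompt could be assembled and sent to a model, assuming the openai Python client; the model name, finding fields, and snippet are placeholders rather than Corgea's actual pipeline.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical finding from a static analyzer
finding = {
    "rule": "hardcoded-password",
    "file_path": "test/api/deluxeApiSpec.ts",
    "snippet": "const password = 'testPassword123'",  # illustrative, not the real test code
}

prompt = (
    "You are an application security reviewer.\n"
    f"A scanner flagged the rule '{finding['rule']}' in {finding['file_path']}:\n"
    f"{finding['snippet']}\n"
    "Decide whether this is a true vulnerability or a false positive. "
    "Weigh whether the file path indicates test code before answering, "
    "and reply with a one-line verdict followed by a short justification."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)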
Example 2: Identifying Plaintext Password Storage
LLMs are also effective at spotting nuanced issues, like storing passwords in plaintext. Below, passwords are saved in a Django model without any hashing, making them readable by anyone with database access. This example demonstrates how an LLM interprets context and why prompt engineering is essential for improving vulnerability detection.
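The vulnerable model looks roughly like the sketch below; the Login class, field names, and lengths mirror the fixed version shown later in this example, and everything else is illustrative.

from django.db import models

class Login(models.Model):
    user = models.CharField(max_length=200)
    # The password is stored exactly as submitted: anyone with database access
    # (or a copy of a backup) can read every credential in plaintext.
    password = models.CharField(max_length=300)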
LLM Analysis
Here, the model first tokenizes key terms like password, login, and models.Model, recognizing that these are frequently associated with user authentication, data models, and a Django application. With inference, the LLM assesses this pattern in the semantic context of a database model for user credentials. Based on its training, the LLM identifies plaintext storage as a vulnerability, noting that best practices require password hashing before storage.
Fine-tuning improves the LLM’s ability to detect and prioritize critical issues. Security engineers can fine-tune the LLM with examples of hashed versus plaintext storage, which reinforces the model’s understanding of security standards.
Prompt engineering also refines the analysis by steering the LLM to detect the issue and suggest secure storage alternatives. For instance, engineers might use a prompt like, “Identify and fix security vulnerabilities in database models,” guiding the LLM to respond with a hashing function like Django’s make_password:
from django.contrib.auth.hashers import make_password

class Login(models.Model):
    user = models.CharField(max_length=200)
    password = models.CharField(max_length=300)  # stores the hashed value, never plaintext

    def save(self, *args, **kwargs):
        # Secured: hash the password before it is written to the database
        self.password = make_password(self.password)
        super().save(*args, **kwargs)
By combining prompt engineering and fine-tuning, the LLM provides a targeted, practical solution that meets best security practices, adding real value for security engineers.
Example 3: Auto-Fixing Dangerous File Upload Patterns
In the following example, users can upload files without restriction, creating a risk if malicious files are uploaded. LLMs can analyze this pattern and suggest restricting uploads to safer file types.
Original Code:
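A representative version of the risky pattern is sketched below, assuming a Flask-style upload handler; the route, upload folder, and names are illustrative and chosen to match the identifiers discussed in the analysis.

import os
from flask import Flask, request

app = Flask(__name__)
UPLOAD_FOLDER = "/var/app/uploads"  # illustrative path

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files["file"]
    # No validation of the file type or name: anything the user sends is written to disk
    file.save(os.path.join(UPLOAD_FOLDER, file.filename))
    return "File uploaded"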
LLM Analysis
In this example, the LLM tokenizes terms such as upload, file, and os.path.join and analyzes them within the semantic context of a file upload function. Using inference, the LLM identifies potential security risks, like accepting unrestricted file types or file paths without validation. The LLM’s contextual awareness helps it recognize that allowing unrestricted file uploads might open the application to malicious file types.
Using prompt engineering, a security-focused prompt like, “Analyze and restrict dangerous file types in upload functions,” can guide the LLM to suggest appropriate code fixes. After fine-tuning on examples of secure upload handling, the LLM generates the following fix:
Suggested Fix:
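A fix consistent with the analysis might look like the following sketch (again assuming Flask; ALLOWED_EXTENSIONS and allowed_file(filename) come from the description below, while secure_filename from Werkzeug is an additional, assumed hardening step).

import os
from flask import Flask, request, abort
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_FOLDER = "/var/app/uploads"  # illustrative path
ALLOWED_EXTENSIONS = {"txt", "jpg"}

def allowed_file(filename):
    # Only accept files whose extension is on the allow-list
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files.get("file")
    if file is None or not allowed_file(file.filename):
        abort(400, "Unsupported file type")
    # secure_filename strips path separators and other dangerous characters
    file.save(os.path.join(UPLOAD_FOLDER, secure_filename(file.filename)))
    return "File uploaded"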
In this corrected code, the LLM suggests checking for safe file extensions, such as .txt and .jpg, by defining a list of allowed extensions (ALLOWED_EXTENSIONS). Using a function like allowed_file(filename) restricts the file types that can be uploaded, mitigating the risk of malicious file types.
The role of prompt engineering here is critical. A well-crafted prompt helps the model focus on finding unrestricted upload patterns, while fine-tuning with secure examples reinforces its ability to generate safe solutions for file handling. Together, prompt engineering and fine-tuning transform the LLM from a passive analyzer into an active, security-aware assistant.
The Role of Prompt Engineering and Fine-Tuning in LLMs for Security
Prompt engineering is the process of carefully crafting prompts to elicit the most useful, contextually accurate responses from LLMs. In security engineering, effective prompts guide LLMs to recognize specific vulnerabilities, interpret the context of code, and provide tailored solutions. For example, prompts like “Identify storage vulnerabilities in authentication models” can help the LLM focus on specific security concerns rather than returning generalized answers.
Fine-tuning involves training an LLM on security-specific data to improve its accuracy. By providing examples of both secure and insecure code patterns, security engineers can shape the model’s responses to align with best practices, making it highly effective in complex security contexts. Fine-tuning allows the model to develop a nuanced understanding of security standards across different programming languages and environments.
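As a rough illustration, fine-tuning data for this kind of task might be prepared as a JSONL file of chat-style examples, as in the sketch below; the format assumes an OpenAI-style fine-tuning pipeline and the examples themselves are invented.

import json

# Two invented training examples: one insecure pattern, one likely false positive.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are an application security reviewer."},
            {"role": "user", "content": "password = models.CharField(max_length=300)  # stored as-is"},
            {"role": "assistant", "content": "Vulnerable: plaintext password storage. Hash with make_password before saving."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are an application security reviewer."},
            {"role": "user", "content": "const password = 'testPassword123'  // in test/api/deluxeApiSpec.ts"},
            {"role": "assistant", "content": "Likely a false positive: a hard-coded credential inside a test file."},
        ]
    },
]

with open("security_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")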
Together, prompt engineering and fine-tuning create a targeted, adaptable model that understands security-specific nuances. The LLM becomes proficient not only in recognizing risks but also in offering actionable recommendations—making it a valuable partner for security engineers.
Why LLMs are Transformative for Application Security
These examples illustrate how LLMs can analyze and address code vulnerabilities effectively, leveraging tokens, inference, semantic context, and Transformer architecture to provide accurate results. Prompt engineering and fine-tuning further empower these models, allowing them to distinguish between test and production code, reduce false positives, and deliver actionable fixes.
Ready to get rid of false positives?
Harden your software in less than 10 minutes.