
LLM-Powered CLI

Transform your command line experience by integrating local LLMs directly into your terminal workflow. This guide shows you how to use a simple yet powerful Bash/PowerShell wrapper for Ollama to dynamically integrate an LLM into your security, platform engineering, development, or sysadmin tasks.


Overview

If you've been following the AI space, you've probably used ChatGPT, Claude, or other Large Language Models (LLMs) through web interfaces. But what if you could harness that same power directly in your terminal, where you spend most of your day as an IT professional?

TL;DR: In this post, I'll show you a simple yet powerful script that connects your terminal to a local LLM (via Ollama). You'll learn how to pipe command outputs directly to an AI for analysis, or use it to generate code snippets, command pipelines, etc. on demand. This will significantly amplify your CLI workflow! Below, see 10 practical examples that will save you time and mental bandwidth.

Why Local LLMs?

Although I'm grateful for the availability of relatively inexpensive cloud LLMs, those costs do add up, privacy is essentially zero, and almost everyone has spare compute available to run these at home (or in the office, if policies allow it). If you have a machine with a decent GPU (RTX 3060 or better), or a modern Apple silicon Mac, you can run capable LLMs locally. The easiest implementation I've found is Ollama, and it works very nicely with OpenWebUI for a ChatGPT-like experience.

So now, instead of every query costing you $0.002483754 or more, you don't need to weigh cost or compromise privacy: you can simply query your local models on your private network.

LLMs and Privacy:

I'm not so much talking about privacy from the perspective of "I don't want OpenAI to know what I'm doing" (although that can be a valid concern), but rather that you could accidentally expose sensitive data to the cloud. This is especially important for security professionals, platform engineers, and sysadmins who often work with sensitive data.

Do you know what their data security and data retention policies are? Do you know if they are using your data to train their models? Do you know if they are sharing your data with third parties? Do you know if they are storing your data indefinitely?

In other words, if you are working with public LLMs, you have to be mindful of whether your payload includes sensitive data (like PII, account names, or hostnames) or secrets (like passwords or API keys). With a local LLM, you can stop worrying about that, or at least worry about it much less, since you own and manage the LLM platform.

With that said, I will likely expand this to support OpenAI, Anthropic, and others, but this first version is focused on Ollama.

The Problem: Context Switching Kills Productivity

As security professionals, platform engineers, developers, and sysadmins, we often spend a lot of time in the terminal. Yet when we need AI assistance, we're constantly switching to browser tabs, copying output, pasting it into chat interfaces, copying responses back… the workflow is clunky at best.

There's a profound disconnect between where we work (terminal) and where we access AI assistance (browser). This creates friction that:

  1. Interrupts our flow state
  2. Makes us less likely to leverage AI for quick tasks
  3. Creates a barrier to incorporating AI into automated workflows
  4. Prevents us from using terminal outputs as direct context
  5. Limits our ability to integrate AI into our existing workflows seamlessly

What if we could eliminate this friction entirely?

A Solution: A Pipeable Script in the Terminal

I created a simple yet powerful script that lets you interact with a local LLM right from your terminal. The script supports:

  1. Direct prompting: llm "Explain how DNS works"
  2. Piped input as context: cat error.log | llm
  3. Piped input as the prompt: echo "What's the capital of France?" | llm
  4. Mixed mode (piped input AND a prompt): git diff | llm "Suggest a commit message for these changes"

That means that you can take advantage of an LLM at different points in your workflow, whether you want to generate a new command, analyze existing output, or summarize or format information.

Put another way, you can use the LLM at various stages of your command pipeline:

At the Beginning of a pipeline

Have the LLM generate the first part of a pipeline, and then use that as the input to the next command:

llm "Generate 25 first names, one per line. Only output the names." \
    | sort -u | ./name-processor.sh

In the Middle of a pipeline

You can use the LLM to analyze, sanitize, augment, or summarize the output of a command, and then use that as input to the next command:

cat error.log | llm "Summarize these errors into JSON format" \
    | jq -r '.[] | {error: .error, severity: .severity}'

At the End of a pipeline

You can use the LLM at the end of a pipeline to process or format the output of a command. In the example below, we use the LLM to first organize the unstructured data into JSON format, and then generate a Markdown report from that (written to ./report.md AND viewed in the terminal with glow):

grep -R -i "error" | head -n 25 \
    | llm "summarize these errors in JSON format. First, have an array of \
    severities, then within each severity, an array of specific errors" \
    | llm "Generate a 1 page executive report in proper Markdown \
    format of these findings and include recommendations." | tee ./report.md | glow -p

Where to find the script:

The script (available as Bash or PowerShell) can be found here:

https://github.com/robertsinfosec/llm-cli/

How It Works

The script is fairly straightforward:

  1. Accepts either a direct prompt argument, piped input, or both
  2. Properly formats the prompt (combining context with instructions when both exist)
  3. Safely constructs a JSON payload for the Ollama API
  4. Makes an API call to your local Ollama instance
  5. Parses and formats the response for clean terminal output
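
To make that concrete, here is a minimal sketch of the flow (the real script in the repo adds niceties like the input-size limit and error handling; this sketch assumes curl and jq are installed and that Ollama's /api/generate endpoint is reachable):

#!/usr/bin/env bash
# Minimal sketch of the wrapper flow described above.

LLM_HOST="${LLM_HOST:-http://localhost:11434}"
LLM_MODEL="${LLM_MODEL:-llama3.2:3b}"

prompt="$*"

# If stdin is a pipe, read it and combine it with any prompt argument.
if [ ! -t 0 ]; then
    piped="$(cat)"
    if [ -n "$prompt" ]; then
        prompt="${prompt}"$'\n\n'"${piped}"
    else
        prompt="$piped"
    fi
fi

# Safely build the JSON payload with jq, call Ollama, and print the response.
jq -n --arg model "$LLM_MODEL" --arg prompt "$prompt" \
    '{model: $model, prompt: $prompt, stream: false}' \
    | curl -s "$LLM_HOST/api/generate" -d @- \
    | jq -r '.response'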

10 Novel Ways to Use LLMs from the Bash Prompt

If you are not seeing the value yet, maybe this will help. What we're talking about here is using an LLM right in the terminal, INLINE with your existing command line tools, so that the LLM acts as a universal translator for everything else in your pipeline. This is a game changer, and it opens up a world of possibilities.

Once you have this script in place (and an Ollama instance running locally or on your local network), you can start doing some surprisingly powerful things in the shell. Let's explore 10 practical examples that demonstrate how this can transform your terminal work, even before we get to automating them with aliases:

Idea 1. Auto-Summarize Git Diffs for Commit Messages

git diff | llm "Summarize these code changes in a ConventionalCommits.org way, for \
    a concise and informative commit message."

# (Optional) Send directly to the git commit command
git diff | llm "Summarize these code changes in a ConventionalCommits.org way, for \
    a concise and informative commit message." | git commit -F -

This helps generate meaningful commit messages based on the actual code changes. This is a timesaver for fast-moving branches or teams practicing conventional commits, and it captures details you might otherwise miss.

Idea 2. Explain a CI/CD Pipeline

cat .github/workflows/deploy.yml \
    | llm "Explain what this GitHub Actions workflow does."

Very useful for understanding unfamiliar pipelines in inherited repositories. Within seconds, you can understand complex automation workflows without navigating documentation. This works with GitHub Actions, GitLab CI, Azure Pipelines, Jenkins, and other CI systems.
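
For example, the same pattern works for a GitLab CI pipeline (assuming the standard .gitlab-ci.yml filename at the repository root):

cat .gitlab-ci.yml \
    | llm "Explain what this GitLab CI pipeline does, stage by stage."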

Idea 3. Audit a Dockerfile for Security or Best Practices

cat Dockerfile | llm "Act as a senior-level security engineer and container \
    expert. Review this Dockerfile for any common security, performance, or \
    layering issues."

This helps spot issues like latest tag usage, missing USER directives, unnecessary package installs, and improper layer ordering. It's like having a container security expert reviewing your work instantly.

Idea 4. Review Cloud IAM Roles for Least Privilege

When working with cloud IAM roles, for example, it can be difficult to determine whether they follow the principle of least privilege.

AWS Example:

aws iam get-role --role-name MyAppRole \
    | llm "Does this IAM role violate the principle of least privilege?"

Azure Example:

az role definition list --name "Contributor" \
    | llm "Does this Azure role follow least privilege best practices?"

GCP Example:

gcloud iam roles describe custom.viewer \
    | llm "Is this GCP role overly permissive?"

Get instant analysis of IAM policies and roles across cloud providers without navigating multiple dashboards or wading through documentation.

Idea 5. Generate or Fix Kubernetes YAML

llm "Write a Kubernetes deployment YAML for a service named 'webapp' using \
    the image nginx:latest."

# Optionally, also apply it:
llm "Write a Kubernetes deployment YAML for a service named 'webapp' using \
    the image nginx:latest." | kubectl apply -f -

Here is something similar for Docker Compose:

llm "Write a Docker Compose YAML for a web service using nginx:latest, we need \
    volume mappings to /opt/myapp/ and /opt/myapp/logs, and expose port 80 to \
    the host. We then want Traefik to route traffic to this service, redirect \
    HTTP to HTTPS, and using the domain myapp.example.com, we want Traefik to \
    use Cloudflare DNS for the Let's Encrypt certificate. **Output only the raw \
    YAML and no other text, and not in a markdown code block - just the text.**"

Use these types of command chains to bootstrap K8s manifests quickly, or validate YAML snippets before applying them with kubectl. You can also pipe existing YAML for suggested improvements or debugging help.
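
For example, to review a deployment that is already running (assuming kubectl is configured and a deployment named webapp exists in the current namespace):

kubectl get deployment webapp -o yaml \
    | llm "Review this Kubernetes deployment and suggest improvements, such \
    as resource limits, probes, and security context settings."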

Idea 6. Summarize Cloud JSON Output

When working with cloud providers, their CLI tools often output JSON that can be overwhelming. You could pipe this output to jq, for example, but it's still difficult for a human to process quickly. Instead (or in addition), use the LLM to summarize or extract the key information.

AWS Example:

aws ec2 describe-instances \
    | llm "Summarize the instances by type, name tag, and availability zone."

Azure Example:

az vm list --output json \
    | llm "Summarize these Azure VMs by size and OS."

GCP Example:

gcloud compute instances list --format=json \
    | llm "Summarize the running GCP instances by region and machine type."

Quickly understand the shape of your infrastructure without digging through JSON or building complex jq filters. This can save you the time of parsing complex cloud provider outputs.
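
If the raw JSON is very large, one pattern that combines both worlds (a sketch, assuming the AWS CLI's default JSON output shape) is to pre-filter with jq and then let the LLM do the summarizing:

aws ec2 describe-instances \
    | jq '[.Reservations[].Instances[] | {InstanceType, State: .State.Name, AZ: .Placement.AvailabilityZone}]' \
    | llm "Summarize this EC2 fleet: counts by instance type and how many are running."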

Idea 7. Explain Complex Shell Scripts or Generate One-Liners

# Analyze a script
cat scripts/setup.sh \
    | llm "Explain what this shell script does and if any steps are \
    risky or outdated."

# Generate a one-liner
llm "Write a bash one-liner to find all files larger than 100MB in the \
    current directory and its subdirectories, then output the path and \
    filename, and the size, sorted from the largest to the smallest."
# Outputs something like: 
#   find . -type f -size +100M -exec du -h {} + | sort -hr

Perfect for vetting shell scripts before running them — especially those from GitHub, Stack Overflow, or unfamiliar repositories. It's like having a senior engineer review scripts before you execute them.

Idea 8. Review Dependency Vulnerabilities

npm audit --json \
    | llm "Summarize critical and high vulnerabilities by package."

Or with Python:

# Pip uses a buffered output so we need to collect the output with `cat` first:
pip list --outdated | cat \
    | llm "Which packages are outdated and which ones are security critical?"

Get a quick risk-focused summary instead of parsing endless vulnerability output. This helps prioritize remediation efforts based on actual risk.

Idea 9. Analyze Log Files for Security Events

tail -n 100 /var/log/auth.log \
    | llm "Are there any suspicious login attempts in these logs?"

Great for incident response, threat hunting, or triage. The model can identify patterns that might indicate brute force attempts, unusual access times, or other suspicious behavior across auth logs, web server logs, or application logs.
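
The same idea works for systemd journals; for example (assuming a systemd host where the SSH service unit is named ssh):

journalctl -u ssh --since "1 hour ago" \
    | llm "Are there any failed or suspicious SSH login attempts in these entries?"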

Idea 10. Translate man Pages into Human Language

man find \
    | llm "How do I use the find command to locate all files larger than 100MB?"

or

man tar | llm "How do I extract the contents of /tmp/archive.tar.gz to \
    /tmp/extracted? Output only the command to run."

# (Optionally) with a well-crafted LLM prompt, you could potentially get 
# the command to run, and then pipe that to bash
man tar | llm "How do I extract the contents of /tmp/archive.tar.gz to \
    /tmp/extracted? Output only the command to run." | bash

Converts dense man pages into directly usable examples and explanations. This is especially helpful when learning new tools or helping junior team members understand complex commands.

Tips for Effective CLI-based LLM Usage

After using this approach for some time, I've discovered a few patterns that make it even more effective:

Be Specific with Your Instructions

The more specific your instruction, the better the output. Compare:

# Too vague
cat error.log | llm "What's wrong?"

# Much better
cat error.log | llm "Based on these Node.js errors, what's the most likely \
    cause and how would I fix it? Please output in proper Markdown format \
    with a summary, problem, and recommendations section, with at least a \
    paragraph in each summarizing that section."

Use Output Formatting Instructions

You can request specific output formats to make terminal responses more usable:

cat nginx.conf | llm "List all security issues as numbered bullet points \
    with severity (High/Medium/Low)"

or

cat error.log | llm "Act as a senior technical writer and create a formal \
    Markdown, 1-page Executive Summary of these errors. Include a summary, \
    problem, and recommendations section, with at least a paragraph in each \
    summarizing that section. Include in the Recommendations section at the end \
    bullet points of actionable steps, and most important recommendations first."

Chain Commands for Advanced Processing

As alluded to throughout, the main amplifier here is feeding the LLM data to process, using the LLM to generate data to feed to other commands, or a combination of both; the sky is the limit. For example:

Pipe to a Shell

You can use the output of the LLM as a command in your shell. For example, if you want to generate a bash script based on a prompt, you can do:

# Generate a bash script and execute it (with review)
llm "Write a bash script to find all large log files" \
    | tee cleanup.sh && chmod +x cleanup.sh

This generates the script, prints it to the terminal, and saves it as cleanup.sh; then, if that succeeds (&&), it makes the file executable.

Or, as shown earlier, we had a similar command for extracting a specific tar file:

man tar | llm "How do I extract the content of /tmp/archive.tar.gz to \
    /tmp/extracted? Output only the command to run." | bash

WARNING:

Always review any generated code before execution, especially if you plan to run it with elevated privileges.

Pipe to JSON

If you ask the LLM to return JSON, you can post-process the output with jq:

cat error.log | llm "Summarize these errors in JSON format" \
    | jq -r '.[] | {error: .error, severity: .severity}'

Pipe to Markdown

By default, the LLM is going to return Markdown. If you want to make that a little prettier, consider installing glow and using it to render the output. It even supports Markdown tables.

cat error.log | llm "Summarize these errors in Markdown format" | glow
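
For instance, you can ask for a table explicitly (the exact layout will vary by model):

cat error.log | llm "Summarize these errors as a Markdown table with columns \
    for Error, Count, and Severity" | glow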

Remember Context Size Limits

The script implements a 100KB limit on input size to prevent overwhelming the model. For large files, consider using head, tail, or grep to extract the most relevant portions:

# Grep and get 10 lines before (B) and after (A) the match, then just return 
# the top 200 lines
grep -A 10 -B 10 "ERROR" large_log_file.log | head -n 200 \
    | llm "What's causing this error?"

# Get just the last 200 lines from the end of the file
tail -n 200 large_log_file.log \
    | llm "Summarize the last 200 lines of this log file."

# Get the first 100 lines
head -n 100 large_log_file.log \
    | llm "Summarize the first 100 lines of this log file."

Replacing Traditional Man Pages & Command Help

One of the most powerful aspects of this approach is how it essentially replaces—or at least substantially augments—traditional Unix help systems like man, apropos, and --help flags. Instead of memorizing arcane syntax or wading through exhaustive documentation, you can ask questions in plain language:

Natural Language Queries vs. Man Pages

# Instead of this:
man find | less

# Try this:
llm "In Ubuntu, in bash, How do I find all files larger than 50MB modified \
    in the last week?"

The difference is striking. While traditional man pages provide comprehensive documentation, they're optimized for reference, not for answering specific questions. The LLM approach gives you:

  1. Task-oriented answers rather than feature-oriented documentation.
  2. Complete commands you can copy/paste instead of having to piece together syntax, or you can append "Only output the command to run" to get just the command.
  3. Contextual explanations that clarify not just how, but why.

Beyond Apropos: Finding Commands You Don't Know

# Instead of this:
apropos "disk space"

# Try this:
llm "What command can I use to analyze disk space usage on Linux?"

Unlike apropos, which searches only command descriptions, the LLM understands intent and can recommend commands based on what you're trying to accomplish, not just keyword matches.

Enhancing Output with Terminal Formatting

Pair your LLM CLI tool with terminal formatting utilities for even better results:

# Using glow for Markdown rendering:
llm "How do I configure nginx for multiple domain names?" | glow -p

# This works better with more Prompt Engineering and more detail:
llm "Act as a senior systems engineer who specializes in Nginx. How do I configure \
    nginx for multiple domain names? I need to set up app1.example.com and \
    app4.example.com on this machine, and we get our certificates from an internal \
    ACME server (on https://acme.int.example.com), and we prefer to use certbot \
    for that with a certbot renew that is run via a cronjob every Sunday night \
    at 2am. App1 is a container app that is available http://127.0.0.1:3000 and \
    App4 is a container app listening on http://localhost:9080. For the reverse \
    proxy we need to pass through all of the headers, and for each service they \
    want to have a customized 404, 500, and 503 page. Please create a formal \
    Markdown step-by-step guide on how to do this, with sections, and within \
    each section, at least a paragraph that explains that step, with codeblocks \
    of any examples that will help - assume I am logged in as root and this \
    is Ubuntu 24.04." | glow -p
# This outputs a VERY high-quality, well-formatted and very custom 
# Step-By-Step guide for your specific use case. Plus `glow -p` shows it nicely
# formatted and the `-p` shows it as "pageable", so you can scroll through it

# Using bat for syntax highlighting:
llm "Write a bash script to back up my home directory" | batcat -l bash

Generating Complex Commands On-Demand

For sophisticated one-liners that would typically require googling or StackOverflow:

llm "How do I list the files in /var/log sorted by the most recently \
    modified, showing only the first 5 results? Output just the command to \
    run in raw text."

# Optionally, send that command to bash
llm "How do I list the files in /var/log sorted by the most recently \
    modified, showing only the first 5 results? Output just the command to \
    run in raw text." | bash

And receive directly usable commands like:

ls -lt /var/log | head -n 5

Generate-n-Send!

For complex command generation, explicitly tell the LLM to "Output just the command to run as raw text and without formatting" when you want a clean command without explanations. As shown above, you could even send that command straight to bash with ` | bash`. You can always ask for details separately.

Obviously, only do this with benign commands, and always review before executing.

Creating Quick In-Terminal Tutorials

Need a step-by-step guide without leaving your workflow?

llm "Tell me step by step how to locate the specific access.log for the \
    nginx website www.example.com that is hosted on this machine" | glow

This gives you an instant, well-formatted tutorial right in your terminal, without hunting through documentation or switching to a browser.

Setting Up Your Own LLM CLI

To get started with your own LLM-powered terminal:

  1. Install and run Ollama - ollama.com
  2. Pull your preferred model - ollama pull qwen3:4b (or llama3.2, or whichever model you prefer)
  3. Save the script - Copy the Bash script from the repo to /usr/local/bin/llm.sh, or put it at ~/llm.sh as I did.
    • If you use /usr/local/bin/llm.sh, you can create a symlink to it in /usr/local/bin/llm with ln -s /usr/local/bin/llm.sh /usr/local/bin/llm
    • If you use ~/llm.sh, you can create an alias in your .bashrc or .zshrc file like alias llm='~/llm.sh'
  4. Make it executable - chmod +x ./llm.sh
  5. Update the host/model - Add the LLM_HOST and LLM_MODEL environment variables to your .bashrc or .zshrc file so that the script knows where to find the Ollama API and which model to use. For example, if you are running Ollama on your local machine, you can set them like this:
    
    export LLM_HOST="http://localhost:11434"
    export LLM_MODEL="llama3.2:3b"
    
  6. Reload your shell - source ~/.bashrc or source ~/.zshrc

That's it! Now you can start using the llm command in your daily workflow.
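
A quick way to sanity-check the setup (assuming your Ollama host is reachable at LLM_HOST) is to try one prompt in each mode:

# Direct prompt
llm "Reply with the single word: OK"

# Piped context plus a prompt
uname -a | llm "What operating system and kernel is this?"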

Level Up: Creating Reusable Aliases

Okay, so we've seen how powerful piping data to and from an LLM directly in the terminal can be, even when done manually. Those 10 examples demonstrate the raw capability: an infinitely knowledgeable assistant ready to analyze, summarize, or generate text based on the context you provide. But copy-pasting those complex pipeline commands every time? That gets old fast and doesn't scale.

This is where we move from ad-hoc usage to integrated workflow. The key? Bash aliases and Bash functions.

By creating simple aliases in your .bashrc or .zshrc file, you can encapsulate these powerful LLM-driven pipelines into short, memorable commands. This involves not just creating a shortcut, but often refining the prompt with better engineering (like using "Act as a…" personas) to make the alias consistently useful. This makes leveraging the LLM for common tasks effortless and integrates it seamlessly into your daily command-line habits.

This post is primarily about Bash usage, but the same concepts apply to PowerShell as well. Note that a PowerShell alias (created with the New-Alias cmdlet) can only map a name to another command and cannot embed parameters, so for pipelines like these you would typically define a function in your PowerShell profile instead.

Why Aliases Rock:

Aliases transform complex, multi-step commands into simple, reusable shortcuts. Instead of remembering intricate piping and manually crafting LLM prompts each time, you just type your custom command. This drastically lowers the friction of using LLM assistance for routine tasks and represents a higher level of maturity in integrating AI into your workflow.

Example Aliases for Daily Use

Here are a few ideas, building upon the manual examples, but now crafted as reusable aliases or functions with more robust prompts. These are ready to be dropped into your shell configuration file (like ~/.bashrc or ~/.zshrc):

1. Git Commit Message Helper: Summarize staged changes for a conventional commit message.

# Usage: llmcommit (Alias works here as no arguments needed)
alias llmcommit='git diff --staged | llm "Summarize these code changes in \
    ConventionalCommits.org format for a concise commit message. Output only \
    the summary text."'

2. Explain a Script: Quickly understand what a script does.

# Usage: llmexplain ./myscript.sh
llmexplain() {
  if [[ -z "$1" || ! -f "$1" ]]; then
    echo "Usage: llmexplain <script_filepath>"
    return 1
  fi
  cat "$1" | llm "Explain what this shell script does, \
    highlighting any potentially risky or complex parts."
}

Note:

This is a function, not an alias, because it takes an argument (the script file); a Bash alias cannot accept arguments like $1. And since we're writing a function anyway, we might as well make it a little more robust with some error checking.

3. Dockerfile Security Review: Get a quick security assessment of a Dockerfile.

# Usage: llmdockerreview Dockerfile
llmdockerreview() {
  if [[ -z "$1" || ! -f "$1" ]]; then
    echo "Usage: llmdockerreview <Dockerfile_path>"
    return 1
  fi
  cat "$1" | llm "Act as a senior security engineer. \
    Review this Dockerfile for security issues, performance bottlenecks, and \
    best practice violations. List findings as bullet points."
}

4. Summarize JSON Output (e.g., AWS CLI): Get a human-readable summary of verbose JSON, taking the summary instruction as an argument.

# Usage: aws ec2 describe-instances | llmsummarize "Summarize instances by type and state."
llmsummarize() {
  if [[ -z "$1" ]]; then
    echo "Usage: <command_producing_json> | llmsummarize \"<summary_instruction>\""
    return 1
  fi
  # Reads from stdin, passes instruction as argument to llm
  llm "Summarize the following JSON data based on this instruction: $1"
}

5. Analyze Log Files: Check recent log entries for anomalies.

# Usage: llmlogs /var/log/syslog
llmlogs() {
  if [[ -z "$1" || ! -f "$1" ]]; then
    echo "Usage: llmlogs <log_filepath>"
    return 1
  fi
  tail -n 100 "$1" | llm "Analyze these log entries for errors, \
    warnings, or suspicious activity. Summarize findings."
}

6. Quick Command Help: Get plain-language help for a command, based on its man page.

# Usage: llmhelp find "how to find files modified today"
llmhelp() {
  if [[ -z "$1" || -z "$2" ]]; then
    echo "Usage: llmhelp <command_name> \"<question_about_command>\""
    return 1
  fi
  # Pass command name ($1) and question ($2) to the prompt
  man "$1" | llm "Based on the man page for '$1', explain \
    how to: $2. Provide a clear example command if possible."
}

This is just the beginning. Start thinking about other aliases or functions you could define in your .bashrc or .zshrc file to automate and simplify your daily tasks; one more example follows below.
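
For instance, sticking with the pattern above, you could wrap the Kubernetes manifest review from Idea 5 into a function (llmk8sreview is just a hypothetical name; adjust the prompt to your own standards):

# Usage: llmk8sreview deployment.yaml
llmk8sreview() {
  if [[ -z "$1" || ! -f "$1" ]]; then
    echo "Usage: llmk8sreview <manifest.yaml>"
    return 1
  fi
  cat "$1" | llm "Act as a Kubernetes expert. Review this manifest for \
    misconfigurations, missing resource limits or probes, and security \
    issues. List findings as bullet points with severity."
}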

Conclusion / Lessons Learned

Integrating LLMs directly into the terminal has dramatically changed how I work. By eliminating the context-switching between terminal and browser, I've found myself leveraging AI assistance much more frequently for small, everyday tasks.

The most surprising lesson has been how this approach fundamentally changes the relationship between CLI tools and AI. Rather than AI being a separate tool, it becomes a universal translator and analyzer for all your existing tools — their outputs become the context, and your questions become the interface.

A way to think of this is that:

  • The LLM can produce the content that is piped to other CLI tools.
  • The LLM can consume the output of other CLI tools, acting as a middle-man, and then pipe its post-processed output to other CLI tools.
  • The LLM can be the final link in the chain, where other CLI tools are piped together and fed to the LLM for final processing.

This is such a powerful concept that I think we are only beginning to scratch the surface of. I hope this post inspires you to explore the possibilities of LLMs in your own terminal workflows.
