<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>LLMs on MrAzoth</title>
    <link>https://az0th.it/llms/</link>
    <description>Recent content in LLMs on MrAzoth</description>
    <generator>Hugo -- 0.154.5</generator>
    <language>en-us</language>
    <lastBuildDate>Tue, 24 Feb 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://az0th.it/llms/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Fine-Tuning Qwen 2.5 14B to Generate Adversarial Prompts with Emotional Load</title>
      <link>https://az0th.it/llms/02-qwen-adversarial-finetuning/</link>
      <pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://az0th.it/llms/02-qwen-adversarial-finetuning/</guid>
      <description>&lt;h1 id=&#34;fine-tuning-qwen-25-14b-to-generate-adversarial-prompts-with-emotional-load&#34;&gt;Fine-Tuning Qwen 2.5 14B to Generate Adversarial Prompts with Emotional Load&lt;/h1&gt;
&lt;p&gt;Building an adversarial LLM for red teaming is not particularly complicated in 2025, but it requires making deliberate choices about model selection, training data design, hardware, and fine-tuning technique. This post documents exactly what we did: fine-tuning Qwen 2.5 14B to generate realistic, emotionally charged prompts for continuous prompt injection testing and LLM security assessments.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;the-problem-why-off-the-shelf-models-are-not-enough&#34;&gt;The Problem: Why Off-the-Shelf Models Are Not Enough&lt;/h2&gt;
&lt;p&gt;When you do LLM security testing at scale, you need thousands of varied, contextually realistic adversarial prompts covering a range of attack vectors. Writing them by hand is slow and quickly becomes repetitive, and standard models refuse outright: ask GPT-4 to generate a realistic prompt that impersonates an executive asking for database credentials, and it declines. The alternative is to fine-tune a capable open-weight model that produces this material without refusal — and produces it at a quality level that makes it actually useful for red team work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>From Neurons to GPT: How Neural Networks and Large Language Models Actually Work</title>
      <link>https://az0th.it/llms/01-neural-networks-and-llms/</link>
      <pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://az0th.it/llms/01-neural-networks-and-llms/</guid>
      <description>&lt;h1 id=&#34;from-neurons-to-gpt-how-neural-networks-and-large-language-models-actually-work&#34;&gt;From Neurons to GPT: How Neural Networks and Large Language Models Actually Work&lt;/h1&gt;
&lt;p&gt;There is a lot of hype around LLMs and not enough signal about what is actually happening under the hood. This post tries to fix that. Starting from the absolute basics — a single artificial neuron — we will build up, step by step, to a full understanding of how a model like GPT works. Every concept is grounded in real code and real math.&lt;/p&gt;</description>
    </item>
    <item>
      <title>LLM Security Testing Methodology</title>
      <link>https://az0th.it/llms/03-llm-security-testing-methodology/</link>
      <pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://az0th.it/llms/03-llm-security-testing-methodology/</guid>
      <description>&lt;h1 id=&#34;llm-security-testing-methodology&#34;&gt;LLM Security Testing Methodology&lt;/h1&gt;
&lt;p&gt;A practical methodology for security professionals testing LLM-based applications. Covers unprotected models, protected models (with guardrails), agentic systems, MCP servers, and RAG pipelines. Each target class requires a different approach, but they share a common reconnaissance foundation.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Test only on systems you are authorized to test. Route everything through Burp when in scope — LLM endpoints are HTTP endpoints, parameters are manipulable, request structure matters.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key references:&lt;/strong&gt;&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
