Recent posts

#1
To really use Paperclip with the SindByte MCP-Server, you need to download the newest version.
Then use "Open Code" and connect Open Code NOT to the MCP-Server but to the "OpenAI compatible Endpoint" that is also built into the SindByte MCP-Server.
Doing so, "Open Code" can directly use all MCP-Server tools and can use the model loaded in LM-Studio.
This way Open Code - and therefore "Paperclip" - can organize a "Trading Company", for example with KRAKEN,
where "virtual employees" can trade with paper or real money (paper trading is supported) and try to make a profit.

You could tell the "virtual CEO" to employ different sorts of traders, have them do paper trading with their system,
and fire those that do not win a lot.

And those that do a good job, let them trade with your real account.
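The hire-and-fire idea above could be sketched roughly like this. Everything here is invented for illustration; these are not actual Paperclip or SindByte APIs, just the evaluation logic a "virtual CEO" would need:

```python
# Hypothetical sketch of the "virtual CEO" evaluation loop described above.
# Function name and threshold are made up for illustration.

def evaluate_traders(paper_results, min_return=0.02):
    """Split virtual traders into keepers and fired, by paper-trading return."""
    keep, fire = [], []
    for name, ret in paper_results.items():
        (keep if ret >= min_return else fire).append(name)
    return keep, fire

# Example: three virtual traders after a paper-trading round
results = {"trend_follower": 0.05, "scalper": -0.01, "swing_trader": 0.03}
keep, fire = evaluate_traders(results)
print(keep)  # ['trend_follower', 'swing_trader'] stay
print(fire)  # ['scalper'] is "fired"
```

Only the traders that survive repeated paper-trading rounds would ever be allowed near a real account.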

Technically this is possible; you need the newest version of "SindByte", which is 1.9.05, for that.
It has the endpoint compatibility with Open Code.

Using the new Qwen 3.6 as the local model, Open Code can do a lot through LM-Studio. And it is altogether FREE.

#2
OxygenBasic / Re: OxygenBasic PreRelease
Last post by Theo Gottwald - Yesterday at 10:53:44 PM
@Charles Pegge
I am willing to help, but you need to explain what I can do for you.
I understand that you do not want KI code, as your own context already exceeds the 1 MB limit by a lot.

Anyway, if you ask me to hunt for more errors, I can do that any time.
But then I need a version which you would call "bug free as far as you know".

Because if I get something that contains bugs, we will find these and then follow-up bugs,
which is a waste of time. So make it bug free, then tell me and send me the most current version.

As said, I can also always digitally sign the executables for you; this will help a lot on today's Windows computers.

#3
OxygenBasic / Re: OxygenBasic PreRelease
Last post by Charles Pegge - Yesterday at 04:43:00 PM
I have dealt with the 12 issues raised by KI Agent A, so far. They are easy to understand and mostly practical to implement across multiple demos and tools. But I would not risk using agent generated code at this stage.

AI KI AGENT A FIXES
08:17 17/04/2026 A12: Create co2.inc for code common to co2 and co2m64
08:10 17/04/2026 A11: Disagree oxideutil.inc findreplace
07:52 17/04/2026 A10: More efficient lcase() in SearchString (searchutil.inc)
07:37 17/04/2026 A9:  Extend filenamelen to 512, but stay with ansi filenames
01:26 17/04/2026 A8:  Static array hfont[4]... DeleteObject hfont[1] etc
00:30 17/04/2026 A7:  Restructure CompilerInfo() (OxideUtil.inc)
00:05 17/04/2026 A6:  Comment out md expression in Oxide
23:52 16/04/2026 A5:  Disagree Peroxide file saving
21:40 16/04/2026 A4:  Fix text line buffer size s=nuls 512
21:29 16/04/2026 A3:  Fix FindFirstFile : if h>0
14:02 16/04/2026 A2:  Fix return codes from exec() etc. (sysutil.inc)
13:52 16/04/2026 A1:  Make co2 and co2m64 compiler command lines case-sensitive (tools\co2*)
#4
## A powerful local-first AI company stack for serious automation 🤖

Paperclip AI is one of the most interesting new tools for organizing AI agents into a real managed structure. Combined with OpenCode CLI, LM Studio, and Qwen 3.6, it opens the door to a serious local-first coding and automation workflow. 🤖💻

Here are some comments; a detailed description is below.

1. To use this combination you NEED the latest LM-Studio 0.4.12, otherwise the computer may hang or even crash.
This is especially because the parallel-processing option is heavily used.

LMS_02.png

2. The new local Qwen 3.6 model is perfect for such tasks; using a paid coding plan for this type of app would possibly be expensive, and the results possibly questionable. I have had it running for a while now, and after several errors that needed to be fixed first (the program is new), my impression is that the fictional company does not yet manage to organize itself properly, while consuming a lot of tokens.

PClip.png

3. The usable context size with the new Qwen 3.6 model is 262K tokens in total.
With multiprocessing - as we see in the pictures - LM-Studio will share the context size between the processes.
And if Paperclip runs 4 processes at the same time, it is possible that 1/4 of 262,000 tokens is just not enough; see below.
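The arithmetic behind point 3 is simple: with four parallel workers sharing one context window, each worker gets only a quarter of it.

```python
# Per-worker context budget when LM Studio splits one context window
# across parallel processes (numbers taken from the post above).
total_context = 262_000      # usable Qwen 3.6 context, in tokens
processes = 4                # parallel Paperclip workers
per_process = total_context // processes
print(per_process)           # 65500 tokens per worker, which may be too little
```

65,500 tokens sounds like a lot, but an agent that keeps code, tool output, and conversation history in context can exhaust that quickly.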

PPC3.png

4. Is it a good idea?
Generally yes, it's a good idea.
I had that idea some time ago, but could not do the implementation.
Whether this implementation is good enough is another question.
I will let it run with my local AI and see if it can do something useful.

5. Used together with the SindByte MCP-Server, it could employ a number of virtual agents that could do trades on Kraken (just an example), and each of these could try to earn more than the other agents.
This could be an interesting combination to test later.

6. This program combination seems to run permanently (depending on the Heartbeat settings; the heartbeat is a timer).
Have you ever wanted your computer to at least "do something" even while you sleep?
This is it. However, if connected to a "Coding Plan" or an API, it will burn money from morning to evening.
So my recommendation is to only use this with local AI, unless you really get it to do something that is worth the cost.
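The "Heartbeat" in point 6 is essentially a timer that periodically wakes the agents so they keep working. A minimal sketch of such a loop follows; the interval, the `wake()` callback, and the beat limit are all assumptions for illustration, not Paperclip's actual settings:

```python
import time

# Minimal heartbeat sketch: wake a worker every `interval` seconds.
# In a real deployment the loop would run indefinitely; max_beats just
# keeps this example finite. All names here are illustrative.
def heartbeat(wake, interval=60, max_beats=3, sleep=time.sleep):
    beats = 0
    while beats < max_beats:
        wake()              # give the agent a chance to act
        beats += 1
        sleep(interval)     # then go back to sleep until the next beat
    return beats

log = []
print(heartbeat(lambda: log.append("tick"), interval=0, max_beats=3))  # 3
```

With a cloud API behind `wake()`, every beat costs money, which is exactly why the post recommends local models for always-on operation.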

Details.
If you have been looking for a way to move beyond "just chatting with an AI" and toward a **structured, controllable multi-agent workflow**, the new **Paperclip AI** stack is one of the most interesting developments right now. Paperclip is positioned as an **open-source orchestration platform for AI agent teams**, with a Node.js server and React UI that lets you organize agents via org charts, budgets, governance, tickets, and audit trails. In short: it is not just another single-agent assistant — it is designed to coordinate multiple agents toward business or project goals. ([GitHub][1])

## 🧠 What Paperclip actually is

Paperclip's core idea is simple but important: instead of running isolated AI terminals and losing track of who is doing what, it gives you a **central control layer**. According to its project pages, it supports concepts such as:

* **Bring your own agent**
* **Goal alignment**
* **Heartbeat-based wakeups**
* **Cost controls / monthly budgets**
* **Ticketing and audit logging**
* **Governance / approvals**
* **Org charts and role hierarchy**
* **Multi-company separation** ([GitHub][1])

That means Paperclip is best understood as a **management and orchestration shell** around other AI workers. It does not replace your coding agent — it coordinates and supervises it. This is especially useful when you want a CEO/CTO/developer/researcher style structure instead of a single monolithic assistant. ([GitHub][1])
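A role hierarchy like the CEO/CTO/developer structure mentioned above can be pictured as a simple tree. The sketch below is purely conceptual; it is not Paperclip's actual data model, just one way to represent "who reports to whom":

```python
# Conceptual org-chart sketch (not Paperclip's real schema):
# each role maps to the roles that report to it.
org = {
    "CEO": ["CTO", "CFO"],
    "CTO": ["developer", "researcher"],
    "CFO": [],
    "developer": [],
    "researcher": [],
}

def reports_chain(org, role, manager="CEO"):
    """Return the management chain from `manager` down to `role`."""
    if manager == role:
        return [role]
    for sub in org.get(manager, []):
        chain = reports_chain(org, role, sub)
        if chain:
            return [manager] + chain
    return []

print(reports_chain(org, "developer"))  # ['CEO', 'CTO', 'developer']
```

An orchestrator can walk such a chain to decide who approves a ticket or whose budget a task is billed against.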

## 💻 Why OpenCode CLI fits so well

This is where **OpenCode CLI** becomes very interesting. OpenCode is an **open-source coding agent** with both a TUI and CLI workflow, and it can be run interactively or programmatically. Its CLI supports commands such as `run`, `agent`, `models`, `mcp`, `serve`, `session`, `web`, and more, which makes it highly suitable as a worker engine inside a larger orchestration system. ([OpenCode][2])

In practical terms, OpenCode is a strong fit for Paperclip because:

* it already behaves like a terminal-native coding agent,
* it supports configurable agents,
* it can be driven in scripted or backend-style workflows,
* and it is designed to connect to different model providers. ([OpenCode][2])
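In a scripted setup, an orchestrator could drive OpenCode through its non-interactive `run` command via a subprocess. The sketch below only builds the command line; the `run` subcommand is documented above, but any flags beyond it would be assumptions:

```python
import subprocess

# Sketch: wrap OpenCode's `run` command as a worker invocation.
# Only the `run` subcommand is taken from the docs cited above.
def opencode_command(prompt):
    return ["opencode", "run", prompt]

cmd = opencode_command("add unit tests for parser.py")
print(cmd)  # ['opencode', 'run', 'add unit tests for parser.py']

# To actually execute (requires OpenCode installed on the machine):
# result = subprocess.run(cmd, capture_output=True, text=True)
```

This is exactly the shape of call a management layer like Paperclip would issue on behalf of a worker agent.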

So the combination looks like this:

* **Paperclip** = the management layer 🧾
* **OpenCode CLI** = the coding worker / execution layer 🛠️
* **LM Studio** = the local inference backend 🖥️
* **Qwen 3.6** = the model brain 🧠

That architecture is one of the cleanest current local-first setups for people who want real agent coordination rather than a single prompt box.

## 🏠 Why LM Studio is the key local piece

**LM Studio** is the part that makes the setup much more attractive for privacy-focused or hardware-heavy users. LM Studio explicitly positions itself as a way to **run AI models locally and privately**, with support for local hardware, an OpenAI-compatible API, and even a **headless deployment mode** via `llmster`. It also exposes developer resources, SDKs, and a CLI (`lms`). ([LM Studio][3])

This matters because OpenCode and similar coding tools work well when they can point to an **OpenAI-compatible local endpoint**. LM Studio provides exactly that. It also now promotes **LM Link**, which allows remote LM Studio instances to be used as if they were local; LM Studio explicitly says that tools already targeting the local LM Studio server can use LM Link models as well, including tools like **OpenCode**. ([LM Studio][3])
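Pointing a tool at LM Studio's local server follows the standard OpenAI chat-completions request shape. The sketch below only constructs the request without sending it; port 1234 is a commonly used LM Studio default, but the URL and model name depend entirely on your own setup:

```python
import json
import urllib.request

# Build (but do not send) a chat-completions request against a local
# OpenAI-compatible endpoint. URL and model name are setup-dependent.
BASE_URL = "http://localhost:1234/v1"     # adjust to your LM Studio server
payload = {
    "model": "qwen3.6-35b-a3b",           # whichever model is loaded
    "messages": [{"role": "user", "content": "Say hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(req.full_url)   # http://localhost:1234/v1/chat/completions
# urllib.request.urlopen(req)  # uncomment with a running server
```

Because the shape is the standard OpenAI one, any tool that speaks that API (OpenCode included) can target this endpoint without modification.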

So if your goal is:

* **local inference**
* **less cloud dependency**
* **better privacy**
* **use of your own GPU hardware**
* **OpenAI-compatible access for agent tools**

then LM Studio is one of the most practical bridges currently available. ([LM Studio][3])

## 🔥 And now comes Qwen 3.6

The newest major model in this chain is **Qwen 3.6**. Qwen officially announced **Qwen3.6-Plus** on **April 1, 2026**, describing it as a model aimed at **real-world agents** with improvements in **coding agents, general agents, and tool usage**, specifically through tighter integration of reasoning, memory, and execution. ([Qwen][4])

Even more interesting for local users: just days later, Qwen also released the first **open-weight** Qwen 3.6 variant, **Qwen3.6-35B-A3B**, describing it as a sparse MoE model optimized for **stability** and **real-world utility**, with an emphasis on a more productive coding experience shaped by community feedback. ([Hugging Face][5])

That makes Qwen 3.6 particularly relevant for this stack for several reasons:

### ✅ Why Qwen 3.6 is a strong match

* It is being positioned for **agentic workflows**, not just plain chat. ([Qwen][4])
* It has a strong emphasis on **coding and terminal-style execution tasks**. ([Qwen][4])
* There is now an **open-weight 35B-A3B** release, making local deployment much more realistic than relying only on a hosted flagship model. ([Hugging Face][5])
* LM Studio already advertises support for the **Qwen3** family among local models. ([LM Studio][3])

For people building a private coding stack, this is a big deal: you can combine **Paperclip orchestration**, **OpenCode execution**, **LM Studio serving**, and **Qwen 3.6 reasoning** into a system that is much closer to a real AI operations environment than a normal chatbot setup.

## 🧩 Why this combo is exciting

What makes this setup stand out is not any single component by itself, but the way the pieces complement each other:

### 1. Paperclip adds structure 📋

Without orchestration, multiple AI tools quickly become chaos. Paperclip adds hierarchy, tasks, budgets, and traceability. ([GitHub][1])

### 2. OpenCode adds practical coding muscle 🛠️

OpenCode is not merely a static chat UI. It is a terminal-centric coding agent with explicit CLI workflows and backend attach/serve options. ([OpenCode][6])

### 3. LM Studio adds local control 🔒

LM Studio provides the model serving layer on your own machine, with OpenAI-compatible access and headless/server deployment options. ([LM Studio][3])

### 4. Qwen 3.6 adds a more agent-oriented brain 🧠

Qwen 3.6 is explicitly being presented as stronger at agentic coding, tool use, and execution-oriented tasks than earlier generations. ([Qwen][4])

## 🛠️ Example usage scenario

A very realistic setup could look like this:

* **Paperclip CEO** receives the high-level business or development goal
* **Paperclip CTO / engineer agents** use **OpenCode CLI**
* OpenCode connects to **LM Studio**
* LM Studio runs **Qwen 3.6** locally
* Paperclip tracks tasks, approvals, budgets, and audit logs

This gives you a workflow where the AI is no longer just "answering questions," but instead:

* planning work,
* assigning work,
* executing coding tasks,
* reviewing outputs,
* and maintaining organizational context over time.

That is exactly the kind of setup many advanced users have wanted for local AI for quite a while.
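The plan/assign/execute/review lifecycle above maps naturally onto a small ticket state machine. This is a conceptual sketch only; the states and transitions are illustrative, not Paperclip's actual ticket model:

```python
# Conceptual ticket state machine for the workflow described above
# (states and transitions are illustrative, not a real Paperclip schema).
TRANSITIONS = {
    "planned": "assigned",
    "assigned": "executing",
    "executing": "review",
    "review": "done",
}

def advance(state):
    """Move a ticket to its next lifecycle state."""
    if state not in TRANSITIONS:
        raise ValueError(f"no transition from {state!r}")
    return TRANSITIONS[state]

state = "planned"
history = [state]
while state != "done":
    state = advance(state)
    history.append(state)
print(history)  # ['planned', 'assigned', 'executing', 'review', 'done']
```

Recording each transition is what gives an orchestration layer its audit trail: every ticket carries its own history.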

## ⚠️ What to keep in mind

This stack is powerful, but it is not magic.

* **Paperclip** adds orchestration, but that also means more moving parts. ([GitHub][7])
* **OpenCode** is flexible, but agent workflows still depend heavily on good model behavior and solid tool integration. ([OpenCode][6])
* **LM Studio** makes local inference easy compared to raw server tooling, but you still need enough hardware for the model size you choose. ([LM Studio][3])
* **Qwen 3.6-Plus** is a hosted flagship-class model, while **Qwen3.6-35B-A3B** is the newly released open-weight option better suited to local deployment; these are not the same thing, so users should choose based on hardware and goals. ([Qwen][4])

## ✅ Bottom line

For anyone interested in **local AI agents that do real work**, this is one of the most compelling current combinations:

**Paperclip AI** gives you the org chart and control plane.
**OpenCode CLI** gives you the coding agent runtime.
**LM Studio** gives you the private local model server.
**Qwen 3.6** gives you a modern agent-oriented reasoning model.

Put together, this creates a serious foundation for **local-first autonomous coding teams** rather than a single isolated assistant. 🚀

If the ecosystem continues to mature, this kind of stack could become a very attractive alternative to cloud-only agent workflows — especially for developers, researchers, and small teams who want **privacy, cost control, and direct ownership of their AI infrastructure**. 🔥

[1]: https://github.com/paperclipai/paperclip "GitHub - paperclipai/paperclip: Open-source orchestration for zero-human companies · GitHub"
[2]: https://opencode.ai/?utm_source=chatgpt.com "OpenCode | The open source AI coding agent"
[3]: https://lmstudio.ai/ "LM Studio - Local AI on your computer"
[4]: https://qwen.ai/blog?id=qwen3.6&utm_source=chatgpt.com "Qwen3.6-Plus: Towards Real World Agents"
[5]: https://huggingface.co/Qwen/Qwen3.6-35B-A3B?utm_source=chatgpt.com "Qwen/Qwen3.6-35B-A3B"
[6]: https://opencode.ai/docs/cli/ "CLI | OpenCode"
[7]: https://github.com/paperclipai "Paperclip · GitHub"
#5
Project Progress and Learning / Re: News 16.04.2026
Last post by Theo Gottwald - April 16, 2026, 07:27:46 PM
When a massive electric crane replaced the dockworkers at Bombay's docks decades ago, it was the first sign of a shift that still echoes today - **programming being overtaken by AI** 🚀

Back then strength mattered: one man could haul 25–100 heavy packages per day.
Then came the cranes - first one, then many - and the dockworkers were left on the pier wondering who still needed them 🤔

Today, coders who once thrived are finding new roles where muscle isn't a requirement.
The modern crane can lift loads we'd never manage by hand, and I see that power growing every day 💪🌐

#Automation #AIRevolution #FutureOfWork #TechShift #Industry40 🌍🤖💻🛠️📈
#6
OxygenBasic / Re: OxygenBasic PreRelease
Last post by Theo Gottwald - April 16, 2026, 07:18:25 PM
@Charles Pegge
Are you interested in bug reports too, or is it too early and you still have lots to do?
When you think it is "bug free", tell me so I can test it for you.
Let me add that I am willing to digitally sign your executables if you ask me to do so.
Unsigned executables often cause lots of trouble under Windows.

On average I find 25 bugs when people tell me their program is bug free. ;D
#7
OxygenBasic / OxygenBasic PreRelease
Last post by Charles Pegge - April 16, 2026, 06:52:24 PM
I will be posting these frequently for anyone with a special interest in the new features or bug fixes.
Older versions will be removed as soon as there is a fresh update.

Included:

re-introduced 'embed' command for inserting file contents directly into the code binary.



recommended by the KI (AI) agents:

case-sensitive compiler command lines, including the switches. Currently all lower-case.

fixed exit codes returned by exec(exefile,1)  in sysutil.inc

#8
OxygenBasic Examples / Re: Parser formula
Last post by Nicola - April 16, 2026, 11:59:00 AM
Hi Theo,
in your code there are errors in the setVar and getVar functions.
Support variables must be used.


function getVar(string nn) as double
    int i
    string name = nn

    name = normalizeName(name)

    for i = 0 to varCount-1
        if varName[i] = name then return varValue[i]
    next

    string msg = "Unknown variable: "
    msg += name
    setError(msg)
    return 0
end function
#9
OxygenBasic Examples / Re: Parser formula
Last post by Nicola - April 16, 2026, 11:00:08 AM
'parser
'Nicola Piano, with help from Copilot
'15-04-2026
'Modified

use console

indexbase 0

' ============================
'   GLOBALS
' ============================
string expr
sys pos

string varName[20]
double varValue[20]
int varCount = 0

declare function parseExpression() as double
'declare function parseTerm() as double
'declare function parsePower() as double
'declare function parseFactor() as double

' ============================
'   VARIABLES
' ============================
sub setVar(string name, double value)
    varName[varCount]  = name
    varValue[varCount] = value
    varCount++
end sub

function getVar(string name) as double
    int i
    for i = 0 to varCount-1
        if varName[i] = name then return varValue[i]
    next
    print "Unknown variable: " name
    return 0
end function

' ============================
'   CHARACTER READING
' ============================
function peek() as byte
    if pos < len(expr) then
        return asc(mid(expr, pos+1, 1))
    else
        return 0
    end if
end function

sub advance()
    pos++
end sub

sub skipSpaces()
    while peek() = 32 or peek() = 9 or peek() = 160
        advance()
    wend
end sub

' ============================
'   PARSER
' ============================

function parseFactor() as double
    skipSpaces()

    ' --- unary + and - ---
    if peek() = asc("-") then
        advance()
        return -parseFactor()
    elseif peek() = asc("+") then
        advance()
        return parseFactor()
    end if

    ' parentheses
    if peek() = asc("(") then
        advance()
        double v = parseExpression()
        skipSpaces()
        if peek() = asc(")") then advance()
        return v
    end if

    ' number
    if peek() >= asc("0") and peek() <= asc("9") then
        double v = val(mid(expr, pos+1))
        while (peek() >= asc("0") and peek() <= asc("9")) or peek() = asc(".")
            advance()
        wend
        return v
    end if

    ' variable
    if (peek() >= asc("a") and peek() <= asc("z")) or _
       (peek() >= asc("A") and peek() <= asc("Z")) then

        string name = ""
        while (peek() >= asc("a") and peek() <= asc("z")) or _
              (peek() >= asc("A") and peek() <= asc("Z"))
            name += chr(peek())
            advance()
        wend
        return getVar(name)
    end if

    return 0
end function

function parsePower() as double
    double base = parseFactor()
    skipSpaces()

    ' ^ is right-associative: 2^3^2 = 2^(3^2)
    if peek() = asc("^") then
        advance()
        double exp = parsePower()
        base = base ^ exp
    end if

    return base
end function


function parseTerm() as double
    double v = parsePower()
    skipSpaces()

    while peek() = asc("*") or peek() = asc("/")
        byte op = peek()
        advance()
        double v2 = parsePower()

        if op = asc("*") then v = v * v2
        if op = asc("/") then v = v / v2

        skipSpaces()
    wend

    return v
end function

function parseExpression() as double
    double v = parseTerm()
    skipSpaces()

    while peek() = asc("+") or peek() = asc("-")
        byte op = peek()
        advance()
        double v2 = parseTerm()

        if op = asc("+") then v = v + v2
        if op = asc("-") then v = v - v2

        skipSpaces()
    wend

    return v
end function

' ============================
'   EVAL WRAPPER
' ============================
function eval(string s) as double
    expr = s
    pos = 0
    return parseExpression()
end function

sub calc(string s)
printl "Expression: " s "  ----> Result: " eval(s)
end sub

' ============================
'   TEST
' ============================
setVar("b", 2)
setVar("c", 5)
setVar("d", 7)

calc("341 -((b+c)*d^2 - 3)")
calc("(b+c)*d^2 - 3)")
calc("-3+2")
calc("2^3^2")
calc("-2^3^2")
wait
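The test expressions above can be cross-checked in Python, where `**` is also right-associative, so `2^3^2` means `2^(3^2)` just as `parsePower()` implements it. The expression with the unbalanced parenthesis is skipped here, since it is malformed by design:

```python
# Cross-check of the parser's test expressions using Python operators.
b, c, d = 2, 5, 7

print(341 - ((b + c) * d**2 - 3))  # 1
print(-3 + 2)                      # -1
print(2**3**2)                     # 512: right-associative, 2**(3**2)
print(-2**3**2)                    # -512
```

Note one subtlety: for `-2^3^2` the OxygenBasic parser binds the unary minus first, `(-2)^9`, while Python evaluates `-(2^9)`; both happen to give -512 here because the exponent is odd.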
#10
Project Progress and Learning / News 16.04.2026
Last post by Theo Gottwald - April 16, 2026, 08:17:00 AM
Tests show that the SindByte MCP-Server still has some quirks that need to be addressed.

2026-04-16 08_03_18-Greenshot.png

2026-04-16 08_03_03-SindByte MCP Server v1.9.04.png

Then there is another task.
The CX64-Editor should also support other compilers, which currently means the good old legacy PowerBasic compiler.
But in the future it could also be Jürgen's new compiler.

As these compilers have no command-line switches to emit their generated ASM code,
the "Intermediate Stages Sidebar" would not work with the legacy PB compiler.

To address this, another agent is working on implementing a disassembler in the editor so it can also display "Disasm" for freshly compiled "legacy" PowerBasic programs.

The reason for this is that the C32 and CX64 compilers do output their "Intermediate Stage ASM Code", but PowerBasic doesn't.

Still, I like that feature, to see which ASM code is generated by which commands. So it will be implemented.

You may ask: "This thing can decompile a PowerBasic EXE or DLL?" The answer is: "I did not yet see the results."
From my current standpoint, I would not recommend putting all your personal life secrets into a compiled executable at this time.

Currently I did not add an option to decompile programs other than the one that is inside the editor.
But of course, technically there is no difference.

As the patterns modern compilers generate are often identical (increase a variable, make a loop, prepare the stack for a function, call an API, etc.),
it may even decompile executables whose source was not PB into PowerBasic code. Which I did not test.

2026-04-16 08_05_26-Start new topic — Originalprofil — Mozilla Firefox.png