# New: Tool Chainer - Build Text Transformation Pipelines
ToolBox just shipped Tool Chainer, a new kind of tool that lets you build multi-step text transformation pipelines right in your browser. This post explains what it does, why it exists, and how to get the most out of it from day one.
Try it now: Tool Chainer
---
The Problem Tool Chainer Solves
Text transformation is one of the most frequent low-level tasks in software development. You receive data in one form, you need it in another, and often multiple conversions are required to get there.
Before Tool Chainer, the flow looked like this:
- Open Base64 Decoder, decode your string, copy the result
- Open JSON Formatter, paste the decoded JSON, format it, copy
- Open a case converter, paste the formatted JSON, extract a specific field, convert it to UPPERCASE, copy
Each step is a separate navigation, a separate paste, a separate copy. If you misread a character at step two, you start over. If you close the tab, the context is gone. If you need to do the same transformation tomorrow, you rebuild it from scratch.
Tool Chainer collapses this sequence into a single pipeline. Define the steps once. Paste the input once. The output of step 1 flows into step 2 automatically, and so on down the chain. Close the tab and reopen it - your pipeline is still there. Share the pipeline with a colleague using a single URL.
---
How Tool Chainer Works
The Pipeline Model
A pipeline is an ordered list of processors. Text enters the top, passes through each processor in sequence, and exits the bottom as transformed output.
```
Input text
    |
    v
[ Step 1: Trim Whitespace ]   output: "hello world"
    |
    v
[ Step 2: UPPERCASE ]         output: "HELLO WORLD"
    |
    v
[ Step 3: Base64 Encode ]     output: "SEVMTE8gV09STEQ="
    |
    v
Final Output: "SEVMTE8gV09STEQ="
```
This is the same model as Unix shell pipelines (`cmd1 | cmd2 | cmd3`), implemented as a visual, interactive browser tool.
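Conceptually, running a pipeline is a left fold over the step list: each processor receives the previous processor's output. A minimal sketch with hypothetical processor functions (illustration only, not Tool Chainer's actual code):

```javascript
// Hypothetical stand-ins for a few of Tool Chainer's built-in processors.
const processors = {
  trim: (text) => text.trim(),
  uppercase: (text) => text.toUpperCase(),
  base64Encode: (text) => btoa(text), // ASCII-only for brevity
};

// A pipeline is an ordered list of processor ids;
// running it is a reduce over the steps.
function runPipeline(input, steps) {
  return steps.reduce((text, id) => processors[id](text), input);
}

runPipeline("  hello world  ", ["trim", "uppercase", "base64Encode"]);
// → "SEVMTE8gV09STEQ="
```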
Adding Processors
Click "Add Step" to open the processor picker. Processors are grouped by category:
- Encoding - Base64 encode/decode, URL encode/decode, HTML entities
- Case - UPPERCASE, lowercase, Title Case, camelCase, PascalCase, snake_case, kebab-case, CONSTANT_CASE
- Format - JSON prettify/minify, trim whitespace, remove blank lines, sort lines, reverse lines, unique lines
- Transform - Reverse string, ROT13, text-to-binary, binary-to-text, Morse encode/decode
- Manipulate - Find & replace (regex supported), add prefix/suffix, number lines, wrap/join lines
- Hash - SHA-1, SHA-256, SHA-512
Click a processor name to add it as the next step in your chain.
Live Output at Every Step
Every step card shows:
- A preview of that step's output (truncated if long)
- The execution time in milliseconds
- An error message if that step failed
This is the key difference from a black-box pipeline. You can see what is happening at each stage, not just the final output. When something goes wrong, you see exactly which step produced the error and why.
Reordering Steps
Drag a step card up or down to change its position in the pipeline. The entire chain reruns immediately and all previews update.
Removing Steps
Click the X on any step card to remove that step. The chain reruns with the remaining steps.
---
35 Built-In Processors: A Closer Look
Encoding Category
The encoding processors handle the most common data encoding formats:
Base64 Encode / Base64 Decode
Base64 is used extensively in APIs, email attachments, and embedded assets. The encode processor converts any UTF-8 text to standard Base64. The decode processor reverses this, with useful error handling when the input is not valid Base64.
```
Input: hello world
Base64 Encode → aGVsbG8gd29ybGQ=
Base64 Decode → hello world
```
For standalone Base64 operations, use Base64 Encoder/Decoder. In a chain, the Encode/Decode processors let you handle encoded data as part of a larger transformation.
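In the browser, `btoa` and `atob` only handle Latin-1, so a UTF-8-safe encode/decode pair routes through TextEncoder/TextDecoder first. A sketch of the technique, not Tool Chainer's actual source:

```javascript
// Encode any UTF-8 text to Base64 by converting to bytes first.
function base64Encode(text) {
  const bytes = new TextEncoder().encode(text);
  return btoa(String.fromCharCode(...bytes));
}

// Decode Base64 back to UTF-8 text.
function base64Decode(b64) {
  const bytes = Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
  return new TextDecoder().decode(bytes);
}

base64Encode("hello world");        // → "aGVsbG8gd29ybGQ="
base64Decode("aGVsbG8gd29ybGQ=");   // → "hello world"
```

Note that `atob` throws on malformed input, which is one natural place for the "useful error handling" the decode processor provides.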
URL Encode / URL Decode
Percent-encoding for query string parameters and URL components. The encoder handles all characters outside the unreserved set (A-Z, a-z, 0-9, -, _, ., ~).
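JavaScript's built-in `encodeURIComponent` leaves `!`, `'`, `(`, `)`, and `*` unescaped even though they fall outside the unreserved set, so a strict encoder of the kind described here might look like this (a sketch; the processor's actual implementation may differ):

```javascript
// Escape everything outside RFC 3986's unreserved set,
// including the five characters encodeURIComponent skips.
function strictUrlEncode(text) {
  return encodeURIComponent(text).replace(
    /[!'()*]/g,
    (c) => "%" + c.charCodeAt(0).toString(16).toUpperCase()
  );
}

strictUrlEncode("hello world & more"); // → "hello%20world%20%26%20more"
```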
```
Input: hello world & more
URL Encode → hello%20world%20%26%20more
```
HTML Entities Encode / Decode
Converts characters with special meaning in HTML (<, >, &, ", ') to their entity equivalents. Essential when generating HTML content from user input.
```
Input: <script>alert("xss")</script>
HTML Encode → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```
Case Category
The case processors handle all the naming conventions used in different programming contexts:
| Processor | Input | Output | Use Context |
|---|---|---|---|
| UPPERCASE | hello world | HELLO WORLD | SQL keywords, constants |
| lowercase | Hello World | hello world | Normalizing input |
| Title Case | hello world | Hello World | Headings, labels |
| camelCase | hello world | helloWorld | JavaScript variables |
| PascalCase | hello world | HelloWorld | Class names, React components |
| snake_case | hello world | hello_world | Python, database columns |
| kebab-case | hello world | hello-world | CSS classes, URL slugs |
| CONSTANT_CASE | hello world | HELLO_WORLD | Environment variables |
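The conversions in the table can be sketched as small word-splitting helpers. The function names are mine, and the sketch assumes space-separated input; the real processors may also split on punctuation or existing camelCase:

```javascript
// Split into lowercase words, then rejoin per convention.
const words = (text) => text.trim().toLowerCase().split(/\s+/);
const cap = (w) => w.charAt(0).toUpperCase() + w.slice(1);

const toCamel    = (t) => words(t).map((w, i) => (i ? cap(w) : w)).join("");
const toPascal   = (t) => words(t).map(cap).join("");
const toSnake    = (t) => words(t).join("_");
const toKebab    = (t) => words(t).join("-");
const toConstant = (t) => words(t).join("_").toUpperCase();

toCamel("hello world");  // → "helloWorld"
toKebab("hello world");  // → "hello-world"
```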
Format Category
Format processors clean and restructure text without changing its semantic content:
JSON Prettify adds indentation and newlines to compact JSON, making it human-readable. JSON Minify removes all whitespace from JSON, reducing its size for transmission.
Trim Whitespace removes leading and trailing spaces and tabs from each line individually. This is different from a global trim - it processes every line.
Remove Blank Lines removes lines that are empty or contain only whitespace after trimming.
Sort Lines (ascending and descending) alphabetically sorts all lines. Combined with Unique Lines, this gives you a deduplicated, sorted set - equivalent to sort | uniq in a shell pipeline.
Reverse Lines reverses the order of lines (not the characters within lines - for that, use the Transform > Reverse String processor).
Unique Lines removes duplicate lines, keeping only the first occurrence of each.
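The `sort | uniq` equivalence can be sketched in a few lines of JavaScript (a hypothetical helper, assuming newline-separated input):

```javascript
// Chains Trim Whitespace, Remove Blank Lines, Unique Lines, and
// Sort Lines — roughly `sort | uniq` from a shell pipeline.
function sortUnique(text) {
  const lines = text
    .split("\n")
    .map((line) => line.trim())       // Trim Whitespace (per line)
    .filter((line) => line !== "");   // Remove Blank Lines
  return [...new Set(lines)].sort().join("\n"); // Unique + Sort
}

sortUnique("  b\n\na\nb  \n"); // → "a\nb"
```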
Transform Category
Transform processors apply structural changes to text content:
Reverse String flips the entire text character by character: hello becomes olleh.
ROT13 applies the Caesar cipher with a shift of 13. Since there are 26 letters and ROT13 shifts by half, applying it twice returns the original. This makes it symmetric: the encode and decode operations are identical.
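A compact sketch of ROT13 that demonstrates the symmetry (this is the standard algorithm, not necessarily Tool Chainer's exact source):

```javascript
// Shift each Latin letter 13 places, preserving case;
// non-letters pass through unchanged.
function rot13(text) {
  return text.replace(/[a-zA-Z]/g, (c) => {
    const base = c <= "Z" ? 65 : 97; // 'A' or 'a'
    return String.fromCharCode(((c.charCodeAt(0) - base + 13) % 26) + base);
  });
}

rot13("hello");        // → "uryyb"
rot13(rot13("hello")); // → "hello" (applying it twice is the identity)
```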
Text to Binary / Binary to Text converts between ASCII text and binary representation:
```
Input: Hi
Text to Binary → 01001000 01101001
Binary to Text → Hi
```
Morse Encode / Decode converts between Latin alphabet text and International Morse Code. Useful for educational purposes and for generating Morse patterns.
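The Text to Binary / Binary to Text pair above can be sketched for ASCII input, where each character maps to one 8-bit group:

```javascript
// Each character becomes its char code, zero-padded to 8 bits.
function textToBinary(text) {
  return [...text]
    .map((c) => c.charCodeAt(0).toString(2).padStart(8, "0"))
    .join(" ");
}

// Reverse: parse each space-separated group back to a character.
function binaryToText(binary) {
  return binary
    .split(/\s+/)
    .map((bits) => String.fromCharCode(parseInt(bits, 2)))
    .join("");
}

textToBinary("Hi");                // → "01001000 01101001"
binaryToText("01001000 01101001"); // → "Hi"
```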
Manipulate Category
The Manipulate category contains the most flexible processors:
Find & Replace is the most powerful manipulate processor. It accepts a search pattern (with optional regex support) and a replacement string. Enable the "Use regex" toggle to use JavaScript regular expression syntax, including capture groups in the replacement string ($1, $2).
```
Pattern: (\w+)@(\w+)\.(\w+)
Replacement: [email redacted]
Input: Contact alice@example.com or bob@company.org
Output: Contact [email redacted] or [email redacted]
```
Add Prefix / Add Suffix prepends or appends a fixed string to every line. Useful for generating code snippets, Markdown lists, SQL values lists, and more.
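The email-redaction example maps directly onto JavaScript's `String.prototype.replace` with a global regex, where `$1`, `$2` in the replacement refer to capture groups (the mapping here is my illustration, not documented internals):

```javascript
// Global regex replace, as in the Find & Replace processor's example.
const input = "Contact alice@example.com or bob@company.org";
const redacted = input.replace(/(\w+)@(\w+)\.(\w+)/g, "[email redacted]");
// → "Contact [email redacted] or [email redacted]"

// With capture groups in the replacement, e.g. "$1 at $2 dot $3",
// the matched username/domain/TLD would be reinserted instead.
```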
Number Lines adds a sequential number to the beginning of each line. Configure the start number and the separator string between the number and the line content.
Wrap Lines and Join Lines control how lines are broken and combined. Wrap Lines enforces a maximum line width by inserting line breaks at word boundaries. Join Lines collapses multiple lines into one, using a configurable separator.
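One way to implement word-boundary wrapping is a greedy fill: keep adding words to the current line until the next word would exceed the width. A sketch; the real processor's handling of edge cases (words longer than the width, existing line breaks) may differ:

```javascript
// Greedy word wrap: emit a line break before any word that would
// push the current line past `width` characters.
function wrapLines(text, width) {
  const out = [];
  let line = "";
  for (const word of text.split(/\s+/)) {
    if (line && (line + " " + word).length > width) {
      out.push(line);
      line = word;
    } else {
      line = line ? line + " " + word : word;
    }
  }
  if (line) out.push(line);
  return out.join("\n");
}

wrapLines("the quick brown fox jumps", 10);
// → "the quick\nbrown fox\njumps"
```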
Hash Category
Hash processors generate cryptographic digests using the browser's native Web Crypto API. No external library is required.
| Processor | Output | Security Level |
|---|---|---|
| SHA-1 | 40-char hex | Deprecated for security; checksums only |
| SHA-256 | 64-char hex | Current standard |
| SHA-512 | 128-char hex | Higher security margin |
Hash processors are typically terminal steps - the hex output is not meaningful input for most other processors.
---
Save and Share Pipelines
Share via URL
The Share button copies a URL encoding your input text and the complete pipeline configuration into the URL's query parameters. Open the URL in any browser, anywhere, and your exact pipeline is restored.
Example use cases:
- Include the URL in a code comment as documentation for a data processing step
- Share a debugging pipeline with a remote teammate without screensharing
- Link to an example in a blog post or documentation
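As an illustration only, a share URL of this kind could be built with `URLSearchParams`. The parameter names and serialization below are assumptions, not Tool Chainer's documented format:

```javascript
// Hypothetical sketch: pack the input text and step list into
// query parameters so the full pipeline travels in one URL.
function buildShareUrl(baseUrl, input, steps) {
  const params = new URLSearchParams({
    input,
    steps: JSON.stringify(steps),
  });
  return `${baseUrl}?${params}`;
}

buildShareUrl("https://example.com/tool-chainer", "hello", [
  { id: "uppercase" },
]);
```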
Export / Import JSON
Export saves the pipeline configuration (not the input text) as a JSON file. The format is a simple array of step objects with an ID and optional configuration:
```json
{
  "version": 1,
  "steps": [
    { "id": "trim-whitespace", "config": {} },
    { "id": "remove-blank-lines", "config": {} },
    { "id": "sort-lines", "config": { "direction": "asc" } },
    { "id": "unique-lines", "config": {} }
  ]
}
```
Import loads a previously exported JSON file and rebuilds the pipeline. This is the recommended way to maintain a library of reusable pipelines - save the JSON files in your project repo or a dedicated tools/ directory.
Persistent State
localStorage automatically saves your current input and pipeline as you work. Closing the tab and returning to Tool Chainer restores everything. You can also use the Reset button to start fresh when you want a clean slate.
---
Five Pipelines to Get You Started
1. Log File Cleanup
For any log or output file with whitespace and duplicate entries:
- Trim Whitespace
- Remove Blank Lines
- Sort Lines
- Unique Lines
- Number Lines
2. Identifier Converter
For converting human-readable names to code identifiers:
- Trim Whitespace
- lowercase
- Find & Replace (pattern: `\s+`, replacement: `-`) → kebab-case
- OR Find & Replace (pattern: `\s+`, replacement: `_`) → snake_case
3. API Payload Inspector
For inspecting a Base64-encoded JSON payload from an API:
- Base64 Decode
- JSON Prettify
4. Secure Transmission Prep
For encoding a JSON payload for URL embedding:
- JSON Minify
- Base64 Encode
- URL Encode
5. Content Checksum
For generating a stable fingerprint of text content:
- Trim Whitespace
- Remove Blank Lines
- SHA-256
---
Privacy and Security
Tool Chainer processes all text locally in your browser. No input text, no intermediate outputs, and no pipeline configurations are sent to any server. The Web Crypto API used for hash operations is the browser's native cryptographic implementation - there is no third-party library involved.
This makes Tool Chainer appropriate for processing:
- Confidential API responses
- Internal configuration values
- Personally identifiable information that needs transformation
- Authentication tokens (for inspection purposes)
The only data that leaves your machine when using Tool Chainer is the page request to load the tool itself. After that, everything runs locally.
---
What Comes Next
Tool Chainer ships with 35 processors, but the processor list is designed to grow. Categories and processors under consideration for future additions:
- Crypto - AES encryption/decryption inline, HMAC-SHA-256 generation
- Text analysis - Word count, character count, line count as pass-through processors that display stats without modifying the text
- Format - XML prettify/minify, YAML prettify
- Conditional steps - Skip a step if the input matches a condition
If there is a processor you would like to see added, the feedback link in the site footer goes directly to the development backlog.
---
Real-World Developer Workflows
Tool Chainer fits into the workflows that developers repeat daily. Here are the most common patterns:
Backend API Development
When working with APIs that exchange data in multiple encoded formats, a typical request/response cycle involves several encoding and decoding steps. A debugging session might require:
- Decoding a Base64-encoded request body to see the raw payload
- Formatting the decoded JSON to inspect its structure
- Extracting and hashing a specific field value for comparison
Without Tool Chainer: open three separate tools, copy-paste between them, lose track of which output came from which step.
With Tool Chainer: build a Base64 Decode -> JSON Prettify pipeline once, paste any encoded request body, and get an instantly readable result every time.
Data Engineering Tasks
Data engineers frequently deal with messy inputs: CSVs with inconsistent quoting, log files with duplicate entries, configuration files with trailing whitespace. Before importing data into a database or processing it with a script, a cleanup pass is almost always needed.
Tool Chainer handles the text-level cleanup. For a list of unique, trimmed, sorted values from a raw dump:
- Trim Whitespace
- Remove Blank Lines
- Sort Lines
- Unique Lines
This is roughly the classic Unix pattern `sed 's/^[[:space:]]*//;s/[[:space:]]*$//' input.txt | grep -v '^$' | sort -u` - but with a visual interface and no terminal required.
Frontend and CSS Work
Frontend developers converting design tokens from one format to another - for example, converting space-separated CSS custom property names to camelCase for a JavaScript object - can build a one-step pipeline:
```
Input: font size heading large
Add camelCase step
Output: fontSizeHeadingLarge
```
Or for SASS variable names with the $ prefix:
- Add Prefix: `$`
- Find & Replace (pattern: ` `, replacement: `-`)

```
Input: primary color
Output: $primary-color
```
Documentation and Writing
Technical writers working with large blocks of text - API documentation, changelog entries, requirement lists - often need to normalize formatting before publishing. Common tasks:
- Removing trailing spaces (editors sometimes add them, linters always complain)
- Sorting a list of feature flags or configuration options alphabetically
- Adding a consistent prefix to all lines in a list
All of these are single-step or two-step Tool Chainer pipelines.
---
Comparing Tool Chainer to Alternative Approaches
vs. Separate Browser Tools (No Chainer)
| Aspect | Separate Tools | Tool Chainer |
|---|---|---|
| Steps required per operation | 1 per tool | 1 pipeline setup |
| Re-running the same transformation | Rebuild from scratch | Reload from localStorage |
| Sharing your workflow | Screenshot or description | Shareable URL |
| Seeing intermediate outputs | Only the final tool's output | Every step's output |
| Error diagnosis | Unclear which step failed | Halts at the failing step |
vs. Shell Scripting
Shell pipelines are more powerful - any command-line tool can be a step. But they require:
- A terminal and shell environment
- Knowledge of command syntax (sed, awk, tr, etc.)
- Setup on each machine you work from
- Additional steps to handle Windows/macOS syntax differences
Tool Chainer trades flexibility for accessibility. For text transformations that fit within its 35 processors, it is faster to set up, easier to share, and works anywhere a modern browser runs.
vs. Custom Scripts
Writing a Python or Node.js script for a one-off text transformation is often overkill. It requires setting up a file, running the interpreter, handling input/output. For quick, ad-hoc work, that overhead is not worth it.
Tool Chainer is the right tool for ad-hoc text transformation: fast to set up, no environment to configure, instant results. When a transformation becomes a production process that runs repeatedly and automatically, move it to a proper script.
---
The Technical Architecture
Why Browser-Native?
Tool Chainer runs entirely in the browser with no server component because:
- Privacy - Your text never leaves your machine. For developers working with production data, API keys, authentication tokens, and internal configurations, server-side processing is a security risk that should be avoided when possible.
- Speed - No network latency. Text processing in JavaScript on a modern device takes milliseconds. A round-trip to a server would add 50-500ms of latency per step.
- Availability - No server means no downtime. Tool Chainer is available even if ToolBox's CDN is temporarily unreachable (because it is installable as a PWA that caches its assets).
- Zero infrastructure cost - No compute resources consumed processing user text server-side.
Web Crypto API for Hashing
The hash processors (SHA-1, SHA-256, SHA-512) use the browser's native crypto.subtle API rather than a JavaScript implementation:
```javascript
async function sha256(text) {
  const encoder = new TextEncoder();
  const data = encoder.encode(text);
  const hashBuffer = await crypto.subtle.digest('SHA-256', data);
  const hashArray = Array.from(new Uint8Array(hashBuffer));
  return hashArray.map(b => b.toString(16).padStart(2, '0')).join('');
}
```
This is the same cryptographic implementation your browser uses for HTTPS. It is fast, audited by browser vendors, and does not require loading an external library.
Pipeline State in localStorage
The current pipeline state (steps and input) is serialized to localStorage on every change:
```json
{
  "input": "your input text here",
  "steps": [
    { "id": "trim-whitespace", "config": {} },
    { "id": "base64-encode", "config": {} }
  ]
}
```
On page load, this state is deserialized and the pipeline is reconstructed. This makes persistence seamless - there is no save button because persistence is automatic.
---
Frequently Asked Questions
Q: Does Tool Chainer replace the individual encoding and formatting tools in ToolBox?
A: No. The individual tools (Base64 Encoder/Decoder, JSON Formatter, URL Encoder/Decoder, etc.) remain available and are often more convenient for single-step operations. They also have additional features - for example, the JSON Formatter has a schema validator and tree view. Tool Chainer is the right choice when you need multiple steps in sequence.
Q: Can I use Tool Chainer with a file as input rather than pasted text?
A: Tool Chainer accepts text input in the input area. For file-based input, copy the file contents and paste them. For very large files (above a few megabytes), this may be impractical - use a shell pipeline or a dedicated scripting tool for large file processing.
Q: Is there a way to run a Tool Chainer pipeline programmatically?
A: Not currently from the UI. If you need to apply the same transformations in a CI/CD pipeline or automation script, the Tool Chainer JSON export documents the steps, which you can reimplement in shell or Python. Future versions may offer a CLI mode or an API.
Q: What happens if I paste sensitive data into Tool Chainer?
A: The data is processed in your browser and never sent to a server. It is also stored in localStorage as part of the persistent state feature. If you are processing sensitive data (passwords, tokens, PII), clear the input field and use the Reset button after your session, or open Tool Chainer in an incognito/private window where localStorage is cleared automatically on close.
Q: Can I share a pipeline without including the input text?
A: The Export JSON feature saves only the pipeline steps, not the input text. Share the exported JSON file to share the pipeline configuration. The Share URL does include the input text. If you want to share the pipeline structure without the data, use Export and share the file.
---
Getting the Most From Tool Chainer
Build a Library of Saved Pipelines
As you use Tool Chainer, export the pipelines you find useful and name them descriptively. A small library of five to ten commonly used pipelines - log cleanup, identifier conversion, payload inspection - will save time repeatedly.
Store these JSON files in a directory your team shares (a tools/ folder in your project repo, a shared drive, or a team wiki). Anyone can import them into their own Tool Chainer session.
Use the Load Sample Button First
If you are new to Tool Chainer, the Load Sample button populates a demo pipeline with a realistic input. Watching the demo pipeline run - and then modifying it step by step - is faster than reading documentation.
Work From the Bottom Up When Debugging
If a pipeline produces wrong output, remove steps from the bottom until the output makes sense, then add them back one at a time. Each step shows its own output in the card preview, so you can see exactly where the wrong transformation is happening.
Combine With Other ToolBox Tools
Tool Chainer handles text-level transformation. For higher-level format conversions, start in a dedicated tool and bring the result into Tool Chainer:
- Convert CSV to JSON first using CSV to JSON, then process the JSON in Tool Chainer
- Generate a UUID using UUID Generator and paste it into Tool Chainer to encode or hash it
- Decode a JWT using JWT Decoder and bring the payload JSON into Tool Chainer for further processing
---
Try It
Open Tool Chainer and click "Load Sample" to see a demo pipeline running. Then clear it, paste your own text, and build a chain that fits your workflow.
Every processor runs in your browser, nothing is sent anywhere, and your pipeline persists across sessions. If you build something useful, share it with the URL sharing feature - one click, no login.
That brings ToolBox to 139 free tools. All client-side, all private, all free.
The pipeline you build today is the one you will reuse tomorrow - and the Export feature means it travels with you wherever you work.