# Building Text Transformation Pipelines with Tool Chainer
The Tool Chainer lets you chain multiple text processors into a single, reusable pipeline. Instead of copying output from one tool and pasting it into the next, you define the steps once and your text flows through all of them automatically.
This guide walks through real-world examples from start to finish, covers every processor category in depth, and shows you how to design pipelines for the tasks that come up most in development and data work.
---
What Is a Text Pipeline?
A text pipeline is a sequence of transformations applied to a string in a fixed order, where the output of each step becomes the input to the next. Pipelines are a fundamental Unix concept - the `|` pipe operator in shell scripting does exactly this - but they are typically unavailable in browser-based tools.
Tool Chainer brings pipeline composition to the browser with a visual, interactive interface. You do not need to write shell commands or code. You pick processors from a list, arrange them in order, and paste your input text. The transformation chain runs instantly in your browser with no server involved.
Why Pipelines Over Individual Tools?
Consider a common data processing task: you have a raw API response with some fields base64-encoded, you want to format the JSON for readability, convert a specific field value to uppercase, then hash the result for logging purposes. With individual tools, that is:
- Open JSON Formatter, paste, format, copy output
- Open Base64 Decoder, paste the encoded field value, copy decoded output
- Open Case Converter, paste, convert to uppercase, copy
- Open Hash Generator, paste, generate SHA-256, copy
Four separate tool loads, four copy-paste steps. Tool Chainer collapses this into a single pipeline that you define once.
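Conceptually, a pipeline is just function composition over strings. A minimal sketch in JavaScript (the helper names here are illustrative, not Tool Chainer's actual internals):

```javascript
// A pipeline is a list of string -> string functions run left to right:
// the output of each step becomes the input to the next.
const runPipeline = (steps, input) =>
  steps.reduce((text, step) => step(text), input);

// Three example processors, analogous to Tool Chainer steps.
const trimLines = (s) => s.split("\n").map((l) => l.trim()).join("\n");
const removeBlankLines = (s) => s.split("\n").filter((l) => l !== "").join("\n");
const uppercase = (s) => s.toUpperCase();

const result = runPipeline(
  [trimLines, removeBlankLines, uppercase],
  "  hello \n\n world "
); // "HELLO\nWORLD"
```

Reordering the array reorders the pipeline, which is exactly what dragging step cards does in the interface.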
---
The Tool Chainer Interface
Input Area
The top of the Tool Chainer page has a large text input area. This is where you paste your raw text before any processing. The input persists in localStorage, so it survives closing the tab.
Step Cards
Below the input, each step in your pipeline appears as a card. Each card shows:
- The processor name
- The configuration options (if the processor has them)
- A live preview of that step's output
- The execution time for that step in milliseconds
- An error message if that step failed
Steps can be reordered by dragging. Click the X to remove a step.
Adding Steps
Click the "Add Step" button to open the processor picker. Processors are organized by category. Click a processor to add it as the next step in the chain.
Live Preview
The output of every step updates in real-time as you type in the input area or change configuration. You never need to click "Run" - the pipeline runs continuously.
Final Output
The output of the last step is the pipeline's result. A prominent copy button lets you copy it to your clipboard.
---
The 6 Processor Categories
Tool Chainer ships with 35 processors across six categories. Here is a detailed breakdown of every processor and when to use each one.
1. Encoding Processors
Encoding processors convert text between different encoding schemes. They are often used at the start of a pipeline (decode incoming data) or at the end (encode output for transmission).
| Processor | What It Does | Common Use Case |
|---|---|---|
| Base64 Encode | Encodes text to Base64 | Encode payload before sending in JSON |
| Base64 Decode | Decodes Base64 to text | Inspect base64-encoded API fields |
| URL Encode | Percent-encodes special characters | Prepare query string parameters |
| URL Decode | Decodes percent-encoded text | Read URL-encoded form data |
| HTML Entities Encode | Converts characters to HTML entities | Prepare user input for HTML output |
| HTML Entities Decode | Converts HTML entities back to text | Read HTML-encoded content |
Use the Base64 Encoder/Decoder standalone tool for one-off operations, but include it in a chain when the encoding step is part of a larger pipeline.
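The encoding steps map onto standard JavaScript APIs. A sketch using Node.js's `Buffer` (browsers would use `btoa`/`atob` instead); the helper names are illustrative:

```javascript
// Base64 and URL encoding via standard APIs. Note: btoa/atob in
// browsers handle Latin-1 only; Buffer handles UTF-8 directly.
const base64Encode = (s) => Buffer.from(s, "utf8").toString("base64");
const base64Decode = (s) => Buffer.from(s, "base64").toString("utf8");
const urlEncode = (s) => encodeURIComponent(s);
const urlDecode = (s) => decodeURIComponent(s);

base64Encode("hello world"); // "aGVsbG8gd29ybGQ="
urlEncode("a=1&b=2");        // "a%3D1%26b%3D2"
```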
2. Case Processors
Case processors change the capitalization scheme of text. They are particularly useful for normalizing identifiers, converting between different naming conventions, and preparing text for different contexts.
| Processor | Output Example | Use Case |
|---|---|---|
| UPPERCASE | HELLO WORLD | Constants, SQL keywords |
| lowercase | hello world | Normalizing input |
| Title Case | Hello World | Display labels, headings |
| Sentence case | Hello world | Single-sentence outputs |
| camelCase | helloWorld | JavaScript variables |
| PascalCase | HelloWorld | Class names, React components |
| snake_case | hello_world | Python variables, database columns |
| kebab-case | hello-world | CSS classes, URL slugs |
| CONSTANT_CASE | HELLO_WORLD | Environment variables, constants |
A practical conversion pipeline: paste a space-separated phrase like "user profile settings" and apply kebab-case to get a CSS class name (user-profile-settings) or snake_case for a database column (user_profile_settings).
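Assuming space-separated input words, these conversions reduce to splitting and rejoining. A sketch (function names are mine, not the tool's):

```javascript
// Split on whitespace, lowercase, then rejoin per convention.
const words = (s) => s.trim().toLowerCase().split(/\s+/);

const toKebab = (s) => words(s).join("-");
const toSnake = (s) => words(s).join("_");
const toCamel = (s) =>
  words(s)
    .map((w, i) => (i === 0 ? w : w[0].toUpperCase() + w.slice(1)))
    .join("");

toKebab("user profile settings"); // "user-profile-settings"
toSnake("user profile settings"); // "user_profile_settings"
toCamel("user profile settings"); // "userProfileSettings"
```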
3. Format Processors
Format processors restructure or clean text without changing its fundamental meaning. These are the workhorses for data cleanup tasks.
| Processor | What It Does |
|---|---|
| JSON Prettify | Formats compact JSON with 2-space indentation |
| JSON Minify | Removes all whitespace from JSON |
| Trim Whitespace | Removes leading and trailing spaces from each line |
| Remove Blank Lines | Strips empty lines from the text |
| Sort Lines | Sorts lines alphabetically (ascending) |
| Sort Lines Descending | Sorts lines reverse-alphabetically |
| Reverse Lines | Reverses the order of lines |
| Unique Lines | Removes duplicate lines, keeping the first occurrence |
For log file processing, a standard Format pipeline is: Trim Whitespace -> Remove Blank Lines -> Sort Lines -> Unique Lines. This gives you a deduplicated, alphabetically sorted list of unique log entries.
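The four steps of that Format pipeline can be sketched as a single function:

```javascript
// Trim Whitespace -> Remove Blank Lines -> Sort Lines -> Unique Lines,
// expressed as one pass over the lines.
const cleanLog = (text) => {
  const lines = text
    .split("\n")
    .map((l) => l.trim())      // Trim Whitespace
    .filter((l) => l !== "");  // Remove Blank Lines
  lines.sort();                // Sort Lines (ascending)
  return [...new Set(lines)].join("\n"); // Unique Lines (keep first)
};

cleanLog("  b \n\na\nb\n"); // "a\nb"
```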
4. Transform Processors
Transform processors apply structural changes to the text content.
| Processor | What It Does |
|---|---|
| Reverse String | Reverses the entire text character by character |
| ROT13 | Applies the ROT13 Caesar cipher |
| Text to Binary | Converts each character to its 8-bit binary representation |
| Binary to Text | Converts 8-bit binary back to characters |
| Morse Encode | Converts text to Morse code |
| Morse Decode | Converts Morse code back to text |
ROT13 has a specific practical use: it is its own inverse (applying it twice returns the original), making it useful for simple content obfuscation in configurations or placeholder data - not for security, but for hiding spoilers or test values that should not be immediately readable.
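A sketch of ROT13 showing the self-inverse property:

```javascript
// Rotate A-Z and a-z by 13 positions; other characters pass through.
const rot13 = (s) =>
  s.replace(/[a-z]/gi, (c) => {
    const base = c <= "Z" ? 65 : 97; // char code of 'A' or 'a'
    return String.fromCharCode(((c.charCodeAt(0) - base + 13) % 26) + base);
  });

rot13("Hello, World!");        // "Uryyb, Jbeyq!"
rot13(rot13("Hello, World!")); // "Hello, World!" - its own inverse
```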
5. Manipulate Processors
Manipulate processors add, remove, or restructure text in more surgical ways.
| Processor | Configuration | What It Does |
|---|---|---|
| Find & Replace | Pattern (supports regex), replacement | Replaces occurrences of a pattern |
| Add Prefix | Prefix text | Prepends text to every line |
| Add Suffix | Suffix text | Appends text to every line |
| Number Lines | Start number, separator | Adds line numbers (e.g., "1. ", "01. ") |
| Wrap Lines | Max characters | Word-wraps at a character boundary |
| Join Lines | Separator | Joins all lines into one with a separator |
| Split to Lines | Separator | Splits one line into multiple at a separator |
| Extract Lines Matching | Pattern (regex supported) | Keeps only lines matching the pattern |
| Remove Lines Matching | Pattern (regex supported) | Removes lines matching the pattern |
The Find & Replace processor with regex support is one of the most powerful processors in the chain. It accepts JavaScript regex syntax (without the surrounding slashes), so \b\w+\b matches any word.
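Under the hood this is plausibly just JavaScript's `String.prototype.replace` with a constructed `RegExp`; a sketch:

```javascript
// Build a RegExp from the bare pattern (no surrounding slashes) with
// the global flag so every occurrence is replaced, then substitute.
const findReplace = (text, pattern, replacement) =>
  text.replace(new RegExp(pattern, "g"), replacement);

findReplace("foo bar foo", "foo", "baz"); // "baz bar baz"
findReplace("a1 b22 c333", "\\d+", "#");  // "a# b# c#"
```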
6. Hash Processors
Hash processors generate cryptographic digests of the input text. These run using the browser's built-in Web Crypto API - no external library is loaded.
| Processor | Output Length | Notes |
|---|---|---|
| SHA-1 | 40 hex chars | Avoid for security; useful for checksums |
| SHA-256 | 64 hex chars | Standard for most security use cases |
| SHA-512 | 128 hex chars | Stronger than SHA-256, longer output |
| MD5 | 32 hex chars | Fast but cryptographically broken; checksums only |
Note: Hash processors are terminal steps by design. The hex output of a hash is not useful as input to most other processors. If you need to re-encode the hash output (for example, Base64-encoding a SHA-256 digest), add an encoding step after the hash step.
---
Real-World Pipeline Examples
Example 1: Cleaning Up a Log File Dump
Scenario: You have a multi-megabyte log file with inconsistent indentation, blank lines from buffered writes, duplicate entries from a retry mechanism, and no line numbers.
Pipeline:
- Trim Whitespace
- Remove Blank Lines
- Sort Lines
- Unique Lines
- Number Lines
What each step does:
After Trim Whitespace, every line starts at column 0. After Remove Blank Lines, the empty gaps are gone. After Sort Lines, related log entries (same component, same error) cluster together alphabetically. After Unique Lines, the duplicates from retries are removed. After Number Lines, you have a clean, numbered, deduplicated list you can grep through or share with a teammate.
Configuration tip: In the Number Lines step, set the start number to 1 and the separator to ". " to get output like:
1. [ERROR] Connection timeout
2. [INFO] Server started
3. [WARN] Memory usage above 80%
Example 2: Preparing API Input From CSV Data
Scenario: You have a CSV export of user email addresses and need to prepare them as a JSON array for an API batch operation.
Input:
alice@example.com
bob@example.com
carol@example.com
Pipeline:
- Trim Whitespace (clean up any extra spaces)
- Add Prefix: `"`
- Add Suffix: `",`
- Join Lines: `\n`
But this is better done in two stages. First use the pipeline to clean the list, then manually format as JSON. For a true CSV-to-JSON conversion, use the dedicated CSV to JSON tool which handles headers and type inference.
Example 3: Generating SQL INSERT Placeholders
Scenario: You have a list of column names and need to generate a SQL INSERT statement's VALUES placeholder.
Input:
id
name
email
created_at
Pipeline:
- Add Prefix: `:`
- Join Lines: `, `
Output:
:id, :name, :email, :created_at
Use this for prepared statement parameter placeholders in SQLAlchemy, PDO, or similar database access layers.
Example 4: Multi-Format Encoding for Transmission
Scenario: You need to take a JSON payload, minify it, then Base64 encode it for embedding in a URL parameter.
Input: A formatted JSON object
Pipeline:
- JSON Minify
- Base64 Encode
- URL Encode (to make the Base64 URL-safe)
Result: A compact, URL-safe encoded version of the JSON, ready to append as a query parameter.
The reverse pipeline - URL Decode, Base64 Decode, JSON Prettify - reconstructs the original for inspection.
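Both directions of Example 4 can be sketched in a few lines (Node.js `Buffer` standing in for the browser's `btoa`/`atob`):

```javascript
// Forward: JSON Minify -> Base64 Encode -> URL Encode.
const encodeForUrl = (json) =>
  encodeURIComponent(
    Buffer.from(JSON.stringify(JSON.parse(json)), "utf8").toString("base64")
  );

// Reverse: URL Decode -> Base64 Decode -> JSON Prettify.
const decodeFromUrl = (param) =>
  JSON.stringify(
    JSON.parse(Buffer.from(decodeURIComponent(param), "base64").toString("utf8")),
    null,
    2
  );
```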
Example 5: Code Identifier Normalization
Scenario: You have a list of human-readable field names from a requirements document and need to convert them to multiple naming conventions simultaneously (create one pipeline per convention).
Input:
User First Name
Account Creation Date
Payment Method Type
Pipeline A (for database columns): lowercase -> Find & Replace (space -> `_`) -> snake_case
Pipeline B (for JavaScript variables): camelCase
Pipeline C (for CSS classes): kebab-case
Save each pipeline as a JSON export (using the Export button) so you can reload any of them later without rebuilding.
Example 6: Processing Obfuscated Configuration Values
Scenario: Your team stores certain non-sensitive config values as ROT13 to prevent casual reading without using actual encryption. You need to quickly inspect and modify them.
Decode pipeline:
- ROT13 (decode)
- Trim Whitespace
Re-encode pipeline:
- Trim Whitespace
- ROT13 (encode - applying ROT13 twice is identity, so "encode" and "decode" are the same operation)
---
Sharing and Saving Pipelines
Share via URL
Click the Share button to copy a URL that encodes your entire pipeline configuration, including the input text. The pipeline is encoded in the URL's query parameters as a compressed JSON string. Anyone who opens the URL gets your exact setup - no account, no server, no authentication required.
When to use URL sharing:
- Documenting a data processing workflow in a README or wiki
- Sharing a debugging tool with a teammate
- Linking to an example pipeline in a Stack Overflow answer
The URL will be long for complex pipelines or large inputs. For large inputs, prefer Export/Import JSON and share the file separately.
Export / Import JSON
Click Export to download a .json file describing your pipeline's steps and configuration (but not the input text). The file is human-readable:
```json
{
  "version": 1,
  "steps": [
    { "id": "trim-whitespace", "config": {} },
    { "id": "remove-blank-lines", "config": {} },
    { "id": "sort-lines", "config": { "direction": "asc" } },
    { "id": "unique-lines", "config": {} },
    { "id": "number-lines", "config": { "start": 1, "separator": ". " } }
  ]
}
```
To reload: click Import and select the file. The pipeline rebuilds instantly.
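Replaying such an export outside the tool is straightforward: map each step `id` to a function and fold over the steps. The registry below is an illustrative sketch, not Tool Chainer's actual implementation:

```javascript
// Minimal step registry keyed by the ids used in the export format.
const registry = {
  "trim-whitespace": (s) => s.split("\n").map((l) => l.trim()).join("\n"),
  "remove-blank-lines": (s) => s.split("\n").filter((l) => l !== "").join("\n"),
  "sort-lines": (s, c) => {
    const lines = s.split("\n").sort();
    return (c.direction === "desc" ? lines.reverse() : lines).join("\n");
  },
  "unique-lines": (s) => [...new Set(s.split("\n"))].join("\n"),
  "number-lines": (s, c) =>
    s
      .split("\n")
      .map((l, i) => `${(c.start ?? 1) + i}${c.separator ?? ". "}${l}`)
      .join("\n"),
};

// Run an exported pipeline: each step's output feeds the next step.
const runExported = (exported, input) =>
  exported.steps.reduce((text, { id, config }) => registry[id](text, config), input);
```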
When to use JSON export:
- Storing reusable pipelines in a project's `scripts/` or `tools/` directory
- Versioning pipelines in git alongside the data they process
- Migrating a pipeline to a different browser or machine
Persistent State in localStorage
Your current input and all pipeline steps are automatically saved to localStorage as you work. Close the tab, open a new one, navigate to Tool Chainer - everything is still there. This is silent and automatic; there is no save button because saving is continuous.
If you want a clean slate, use the Reset button to clear both the input and the pipeline steps.
---
Error Handling in Pipelines
When a step fails, the chain halts at the failing step. The step card shows a red error indicator and a message explaining what went wrong. All steps after the failing one are grayed out with a "Waiting for previous step" message.
Common failure scenarios:
Base64 Decode on Non-Base64 Input
If you add a Base64 Decode step and the input (or the output of the previous step) is not valid Base64, the decoder throws an error. The fix is usually to add a Trim Whitespace step before the Base64 Decode - whitespace characters are not in the Base64 alphabet and cause decode failures.
JSON Operations on Non-JSON Input
JSON Prettify and JSON Minify will fail on malformed JSON. The error message includes the parse error, which helps you identify whether you have a missing bracket, a trailing comma, or unquoted keys. Use the JSON Formatter standalone tool for detailed error diagnostics before building a chain around JSON operations.
Regex Errors in Find & Replace
If you enter an invalid regex pattern in the Find & Replace processor, the step will fail with a regex syntax error. Common mistakes: unescaped special characters (+, ?, (, )) and unmatched brackets.
Chaining After a Hash Step
If you add processors after a hash step, they receive the hex digest as input. Most processors handle this gracefully, but some transforms (like JSON operations) will fail on hex input. Hashing should typically be a terminal step.
---
Performance Characteristics
All processors run synchronously in JavaScript. For typical inputs (a few KB of text), every step completes in under 1 millisecond. For large inputs (hundreds of KB), some processors are slower:
| Processor | Performance at 100KB |
|---|---|
| Trim Whitespace | ~2ms |
| Sort Lines | ~10ms (sort is O(n log n)) |
| Unique Lines | ~5ms |
| JSON Prettify | ~8ms |
| SHA-256 | ~3ms (Web Crypto API is fast) |
| Find & Replace (regex) | Varies with pattern complexity |
The live preview updates on every keystroke, so if you are processing very large inputs, you may notice a slight lag while typing. For inputs above 1MB, consider processing in chunks or using a command-line tool like awk or sed for the bulk transformation, then using Tool Chainer for the refinement steps.
---
Tool Chainer vs. Shell Pipelines
Developers familiar with Unix pipelines will recognize Tool Chainer as a browser-native equivalent. Here is a comparison of equivalent operations:
| Tool Chainer Pipeline | Shell Equivalent |
|---|---|
| Trim Whitespace -> Remove Blank Lines | `sed 's/^ *//;s/ *$//' file.txt \| sed '/^$/d'` |
| Sort Lines -> Unique Lines -> Number Lines | `sort -u file.txt \| nl` |
| Base64 Encode | `base64 file.txt` |
| SHA-256 Hash | `sha256sum file.txt` |
| Find & Replace (foo, bar) | `sed 's/foo/bar/g' file.txt` |
| lowercase | `tr '[:upper:]' '[:lower:]' < file.txt` |
Tool Chainer advantages over shell pipelines:
- No shell required - works in any browser, including on Windows without WSL
- Visual feedback at every step - see intermediate outputs without `tee`
- Shareable URLs - send the pipeline to someone without sharing a shell command
- No syntax memorization - pick processors from a visual list
Shell pipeline advantages over Tool Chainer:
- Arbitrary programs in the pipeline - not limited to built-in processors
- Handles binary data and files, not just text
- Faster for very large inputs
- Composable with the full Unix toolbox (`grep`, `awk`, `jq`, etc.)
For most everyday text processing tasks in a browser context, Tool Chainer is faster to set up and easier to share.
---
Designing Good Pipelines
Keep Steps Focused
Each step should do one thing. If you find yourself wanting a step that "cleans up the JSON AND sorts the keys", split the work into separate, focused steps. Small steps are easier to debug, because the live preview shows you exactly where the output stops matching your expectations.
Order Matters for Encoding Steps
Always decode before transforming, and encode after transforming. The wrong order - for example, uppercasing a Base64 string before decoding it - produces incorrect output because Base64 is case-sensitive.
Wrong order: UPPERCASE -> Base64 Decode
Correct order: Base64 Decode -> UPPERCASE
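A quick demonstration of why the order matters (Node.js `Buffer` used for the sketch):

```javascript
// Base64 is case-sensitive: "aGVsbG8=" and "AGVSBG8=" decode to
// entirely different bytes, so uppercasing before decoding corrupts data.
const b64decode = (s) => Buffer.from(s, "base64").toString("utf8");
const encoded = Buffer.from("hello", "utf8").toString("base64"); // "aGVsbG8="

const correct = b64decode(encoded).toUpperCase(); // decode, then uppercase: "HELLO"
const wrong = b64decode(encoded.toUpperCase());   // uppercase, then decode: garbage
```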
Add Whitespace Cleanup Early
Trim Whitespace and Remove Blank Lines are cheap operations. Adding them at the start of a pipeline prevents whitespace from causing failures in later steps (particularly Base64 and JSON operations).
Test With Representative Input
The pipeline's live preview makes it easy to test against your actual data. Before saving or sharing a pipeline, test it against edge cases: empty input, input with special characters, input that is already in the target format.
Name Your Exports Descriptively
When you export a pipeline to JSON, the file is named pipeline.json by default. Rename it to something descriptive before saving, like normalize-api-response.json or generate-sql-placeholders.json. This makes your collection of saved pipelines navigable without opening each one.
---
Integration With Other ToolBox Tools
Tool Chainer is most effective when combined with other tools for tasks that fall outside the 35 built-in processors.
Before the chain:
- Use CSV to JSON to convert tabular data into a JSON structure before piping it through JSON processors
- Use YAML/JSON Converter to convert YAML to JSON before JSON processing steps
- Use Diff Checker to verify what changed between two versions of a file before building a cleanup pipeline
After the chain:
- Use JSON Schema Generator to generate a schema from the cleaned JSON output
- Use Hash Generator for more detailed hash operations if the pipeline's hash output is not sufficient
- Use Slug Generator to convert pipeline output to URL-friendly slugs
---
Advanced Use Cases
Building a Data Sanitization Pipeline
When working with data from external sources - web scraping, legacy exports, third-party APIs - the data often contains artifacts that need removal before processing. A sanitization pipeline might look like:
- Trim Whitespace (remove edge whitespace from each line)
- HTML Entities Decode (convert `&amp;`, `&lt;`, `&gt;`, etc. back to characters)
- Remove Lines Matching (remove lines containing "N/A" or "--" placeholder values)
- Remove Blank Lines (remove now-empty lines)
- Sort Lines (alphabetize for deduplication)
- Unique Lines (remove duplicates introduced by the data source)
This pipeline handles most of the common data quality issues in a single pass.
Building a Checksum Pipeline
When you need to verify the integrity of text data - ensuring a configuration value has not changed, comparing two versions of a template - a checksum pipeline generates the fingerprint:
- Trim Whitespace (normalize edge whitespace so checksums are stable)
- Remove Blank Lines (normalize blank lines)
- SHA-256 (generate the fingerprint)
The resulting hash is stable for any two inputs that differ only in edge whitespace and blank lines, but changes for any substantive content difference.
Store the hash alongside the original. Run the same pipeline on the current version and compare - if the hashes match, nothing changed.
Building a Markdown List Generator
You have a list of raw items (one per line) and want to generate a Markdown bullet list for documentation:
- Trim Whitespace
- Remove Blank Lines
- Add Prefix: `- `
Output:
- Item one
- Item two
- Item three
Swap the prefix step for Number Lines to generate a numbered list format for other documentation systems:
- Trim Whitespace
- Remove Blank Lines
- Number Lines (with separator ". ")
Output:
1. Item one
2. Item two
3. Item three
---
Processor Quick Reference
Here is a quick reference of the processors in a single table:
| Category | Processor | Input | Output |
|---|---|---|---|
| Encoding | Base64 Encode | Any text | Base64 string |
| Encoding | Base64 Decode | Base64 string | Original text |
| Encoding | URL Encode | Text with special chars | Percent-encoded string |
| Encoding | URL Decode | Percent-encoded string | Decoded text |
| Encoding | HTML Encode | Text with <, >, & | HTML entities |
| Encoding | HTML Decode | HTML entity text | Plain text |
| Case | UPPERCASE | Any text | ALL CAPS |
| Case | lowercase | Any text | all lowercase |
| Case | Title Case | Any text | First Letter Capitalized |
| Case | Sentence case | Any text | First word capitalized |
| Case | camelCase | Space-separated words | camelCaseOutput |
| Case | PascalCase | Space-separated words | PascalCaseOutput |
| Case | snake_case | Space-separated words | snake_case_output |
| Case | kebab-case | Space-separated words | kebab-case-output |
| Case | CONSTANT_CASE | Space-separated words | CONSTANT_CASE_OUTPUT |
| Format | JSON Prettify | Valid JSON | Indented JSON |
| Format | JSON Minify | Valid JSON | Compact JSON |
| Format | Trim Whitespace | Text with edge spaces | Trimmed text |
| Format | Remove Blank Lines | Text with empty lines | Text without blank lines |
| Format | Sort Lines Asc | Multi-line text | Alphabetically sorted |
| Format | Sort Lines Desc | Multi-line text | Reverse-alphabetical |
| Format | Reverse Lines | Multi-line text | Lines in reverse order |
| Format | Unique Lines | Multi-line text with dupes | Deduplicated lines |
| Transform | Reverse String | Any text | Reversed character-by-character |
| Transform | ROT13 | Text | ROT13 encoded/decoded |
| Transform | Text to Binary | Text | Space-separated 8-bit binary |
| Transform | Binary to Text | 8-bit binary | Original text |
| Transform | Morse Encode | Latin characters | Morse code |
| Transform | Morse Decode | Morse code | Latin characters |
| Manipulate | Find & Replace | Text + pattern + replacement | Modified text |
| Manipulate | Add Prefix | Text + prefix string | Prefix added to each line |
| Manipulate | Add Suffix | Text + suffix string | Suffix added to each line |
| Manipulate | Number Lines | Text + start + separator | Numbered lines |
| Manipulate | Wrap Lines | Text + max characters | Word-wrapped lines |
| Manipulate | Join Lines | Text + separator | Single joined line |
| Manipulate | Split to Lines | Text + separator | One line per segment |
| Manipulate | Extract Lines Matching | Text + pattern | Only matching lines |
| Manipulate | Remove Lines Matching | Text + pattern | Non-matching lines |
| Hash | SHA-1 | Any text | 40-char hex digest |
| Hash | SHA-256 | Any text | 64-char hex digest |
| Hash | SHA-512 | Any text | 128-char hex digest |
| Hash | MD5 | Any text | 32-char hex digest |
---
Frequently Asked Questions
Q: Can I process binary data (images, PDFs) in Tool Chainer?
A: Tool Chainer processes text strings only. Binary data must be Base64-encoded before it can be manipulated as text. You can then decode, inspect, and re-encode, but you cannot apply most text processors meaningfully to binary content.
Q: Is there a limit to how many steps I can add?
A: There is no hard-coded step limit. Practically, pipelines with more than 15-20 steps become difficult to read and manage. If your pipeline is very long, consider whether it could be split into two pipelines - save the output of the first as the input for the second.
Q: Can I use Tool Chainer for multi-line inputs with thousands of lines?
A: Yes. Sort Lines and Unique Lines are O(n log n) operations that scale well to thousands of lines. For inputs above a few megabytes, you may notice a performance impact on the live preview. Disabling live preview (if that option is available) and clicking Run manually helps for very large inputs.
Q: Can the Find & Replace processor use capture groups?
A: Yes. The Find & Replace processor uses JavaScript's String.prototype.replace() with a RegExp, so capture group references like $1 and $2 work in the replacement string. For example, pattern (\w+)\s(\w+) with replacement $2 $1 swaps the first two words.
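For example, the word swap described above maps directly onto `replace` with group references:

```javascript
// $1 and $2 refer to the first and second capture groups.
const swapFirstTwoWords = (text) => text.replace(/(\w+)\s(\w+)/, "$2 $1");

swapFirstTwoWords("hello world");   // "world hello"
swapFirstTwoWords("one two three"); // "two one three"
```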
Q: Does Tool Chainer work offline?
A: Yes. Tool Chainer uses no external resources at runtime. All processors are implemented in client-side JavaScript. Once the ToolBox page has loaded (which requires an internet connection the first time), you can disconnect from the network and Tool Chainer will continue to work. ToolBox is also installable as a PWA, which caches assets for offline use.
---
Try It Now
Open Tool Chainer and click "Load Sample" to see a demo pipeline in action. The sample demonstrates five chained processors on a realistic input - a good starting point for building your own.
From there, clear the sample, paste your own text, and add processors one at a time. Watch the live preview update after each step. If a step breaks, the error message points you directly at what went wrong.
Every processor runs entirely in your browser. No data is sent to any server. The pipeline you build is yours to export, share, and reuse.
For complex data transformations that require format conversion before the pipeline, start with CSV to JSON or YAML/JSON Converter, then bring the result into Tool Chainer for text-level processing.