I'm Cedric, a CTO based in Copenhagen, Denmark. I built Utilitiz to solve a specific
problem I kept running into: translating and uploading large JSON data files to PIM
systems using AI-powered translation scripts. The challenge was twofold. First, the data
was often too large for API limits. When your JSON file is 50MB and your API has a 5MB
limit, you're stuck. Second, the data was cluttered with empty values that prevented clean
uploads and broke automated workflows.
After leading technical teams and building enterprise AI systems for content translation
and data processing, I needed a tool that didn't exist. Something that could clean, split,
and merge JSON files instantly without uploading sensitive product data to third-party
servers. When you're dealing with proprietary product catalogs and customer information,
privacy isn't optional. So I built this tool to run entirely in your browser, where your
data never leaves your computer.
Why Use Utilitiz?
Clean JSON
Automatically removes null values, undefined properties, empty strings, and empty
objects or arrays from your JSON data. Perfect for cleaning API responses and reducing
file sizes.
Split JSON
Break large JSON files into smaller, manageable parts. Split by file size (KB) or by
attribute count to meet API limits, email attachments, or storage requirements.
Merge JSON
Combine multiple JSON objects or arrays into a single file. Supports object merging,
where later values overwrite duplicate keys, and array concatenation, making it easy to
consolidate data from multiple sources.
Quick Start Guide
1. Paste or Drop
Paste your JSON data into the input box, or drag and drop a .json file directly onto
the input area.
2. Choose Mode
Select Clean to remove empty values, Split to break large files into parts, or Merge
to combine multiple JSON objects.
3. Process & Download
Click the process button (or press Ctrl+Enter) and download your result. All
processing happens locally in your browser.
Why JSON Utilities Matter
JSON manipulation might seem trivial until you're dealing with real production data. In my
experience building AI translation systems for e-commerce companies, inefficient JSON
handling costs time, bandwidth, and money. A 5MB API response with 40% null values wastes
2MB of bandwidth per request. Multiply that by thousands of API calls per day, and you're
looking at significant network overhead and slower response times.
When I was implementing automated product data translation workflows for international
expansion, we discovered that splitting a 100MB JSON export saved our team 3 hours of
manual processing time. Instead of trying to upload massive files that would timeout or
exceed API limits, we could break them into manageable chunks that processed reliably.
This wasn't just about convenience; it was about making automated workflows actually work
in production.
Performance implications extend beyond bandwidth. Bloated JSON files with unnecessary
empty values increase storage costs in databases and slow down indexing operations. In one
project, cleaning JSON data before database insertion reduced our storage footprint by 35%
and improved query performance measurably. Empty objects and null fields don't just waste
space; they also complicate data validation and can cause unexpected errors in downstream
processing.
Consider a typical scenario: you're integrating with a third-party API that returns
verbose JSON with many optional fields. Your application only needs a subset of that data.
If you store everything including hundreds of null values, you're paying for storage you
don't need and making your own APIs slower when they query that data. Cleaning JSON isn't
about perfectionism; it's about building efficient, maintainable systems.
How splitting works:
By File Size: Splits your JSON so each part stays under your
specified KB limit (default: 8KB)
By Attribute Count: Splits arrays every N items or objects every N
keys (default: 100)
Each resulting part will display its size and have a download button.
Best Practices for JSON Manipulation
When to Clean JSON
Clean JSON before storage when you're inserting data into databases or caching systems.
Empty values waste storage space and complicate queries. Clean before transmission when
you're making API requests with payload size limits. And clean before processing when
empty values might cause validation errors or unexpected behavior in your application
logic. In my experience, cleaning data at the boundary points (before storage, before
transmission) prevents issues from propagating through your system.
Optimal Splitting Thresholds
For API limits, split with a safety margin. If your API limit is 5MB, target 4.5MB parts
to account for additional headers or metadata. For email attachments, keep parts under
10MB for reliable delivery across different email providers. For cloud storage multipart
uploads like AWS S3, stay within the 5MB minimum and 5GB maximum part size requirements.
When splitting for parallel processing, consider your system's CPU and memory constraints;
more parts aren't always better if they overwhelm your system.
Merge Conflict Strategies
When merging objects with duplicate keys, remember that later values overwrite earlier
ones. If you need to preserve all values, restructure your data before merging. For
instance, if multiple objects have a "settings" key, consider nesting them under unique
identifiers first. When merging arrays, order matters. If you're combining data from
different sources and need to maintain relationships, ensure your arrays are sorted
consistently before merging.
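To make the restructuring idea concrete, here's a minimal sketch, assuming two plain
objects that both define a "settings" key; the variable names are illustrative:

```javascript
// Two sources that both define a "settings" key.
const sourceA = { settings: { theme: "dark" } };
const sourceB = { settings: { theme: "light", locale: "da-DK" } };

// Naive shallow merge: the later "settings" silently wins.
const naive = { ...sourceA, ...sourceB };
// naive.settings -> { theme: "light", locale: "da-DK" }

// Restructure under unique identifiers first so nothing is lost.
const safe = { a: sourceA.settings, b: sourceB.settings };
// safe -> { a: { theme: "dark" }, b: { theme: "light", locale: "da-DK" } }
```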
Performance Tips for Large Files
For JSON files over 50MB, consider splitting first even if your end goal is cleaning or
merging. Processing smaller chunks reduces the risk of browser crashes from memory
exhaustion. On mobile devices, keep processing under 25MB per operation. If you're
building automated workflows, add file size checks before processing and route large files
through backend systems with more resources. I learned this lesson the hard way when
browser tabs crashed processing 100MB+ files on standard laptops.
Data Integrity Considerations
Before cleaning, verify that your empty values are actually unnecessary. A status field
with value 0 or false is semantically different from null or missing. In one project, we
accidentally removed critical "enabled: false" flags because our cleaning logic was too
aggressive. Always test cleaning logic with representative data before running it on
production datasets. Similarly, when splitting data with foreign key relationships, ensure
you're not breaking references between related records.
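One way to catch this class of mistake is to look at the removal predicate itself. Here's
a small sketch contrasting an overly aggressive check with a safer one; neither is this
tool's internal code:

```javascript
// Overly aggressive: treats every falsy value as "empty",
// so 0, false, and "" are dropped along with null.
const tooAggressive = (v) => !v;

// Safer: only null and undefined count as removable.
const safer = (v) => v === null || v === undefined;

const record = { access_level: 0, enabled: false, note: null };
console.log(Object.keys(record).filter((k) => !tooAggressive(record[k])));
// -> [] (everything dropped, including the meaningful 0 and false)
console.log(Object.keys(record).filter((k) => !safer(record[k])));
// -> ["access_level", "enabled"]
```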
Common Use Cases
API Development and Testing
When developing APIs, test data often includes placeholder values and optional fields
filled with null. Before deploying to production, cleaning this data ensures your API
responses are lean and focused. For example, a user profile API might return 30 fields
where 12 are null for basic accounts. Cleaning reduces the response from 2.1KB to 1.3KB,
a 38% reduction. Multiply that across thousands of API calls and the bandwidth savings
become significant.
Data Migration and ETL
Migrating data between systems often involves exporting massive JSON files. A database
export of 100MB exceeds most API and cloud storage limits. Splitting this export into
10MB chunks enables reliable upload through standard APIs without custom solutions. In
one migration project, we split a 200MB product catalog into 25 parts for AWS S3
multipart upload, completing a migration that previously failed due to timeout errors.
Configuration Management
Modern applications use JSON for configuration across multiple environments. A base
config file defines defaults, environment-specific files override certain values, and
merging them produces the final runtime configuration. For instance, merging base.json,
production.json, and secrets.json creates your production config with database
credentials, feature flags, and API endpoints all in one place. The merge order matters:
later files override earlier ones for conflict resolution.
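As a rough sketch of that merge order with plain objects (the keys here are hypothetical,
not a recommended config schema):

```javascript
// Roughly what merging base.json, production.json, and secrets.json produces.
const base = { logLevel: "info", featureFlags: { beta: false } };
const production = { logLevel: "warn", apiEndpoint: "https://api.example.com" };
const secrets = { dbPassword: "change-me" };

// Later objects override earlier ones for duplicate keys (shallow merge).
const runtimeConfig = { ...base, ...production, ...secrets };
// runtimeConfig.logLevel -> "warn" (production overrides base)
```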
Database Import Preparation
Many databases validate JSON structure during import. A field with value null might
violate a NOT NULL constraint, but a missing field uses the column default. Cleaning
JSON before import transforms null values into missing keys, preventing validation
errors. I encountered this when importing user preferences into PostgreSQL. The original
data had "notification_email": null for users who hadn't set preferences. After
cleaning, those keys were removed and the database applied the default value correctly.
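A before/after sketch of that PostgreSQL case, with hypothetical field names:

```javascript
// Before cleaning: the explicit null would violate a NOT NULL constraint
// (or override the column default) on import.
const before = { user_id: 42, notification_email: null };

// After cleaning: the key is gone, so the database applies its default.
const after = Object.fromEntries(
  Object.entries(before).filter(([, value]) => value !== null)
);
// after -> { user_id: 42 }
```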
Debugging and Development
When debugging APIs, verbose responses with dozens of null fields obscure the actual
data. Cleaning the response reveals what's actually present, making it easier to spot
issues. Similarly, when writing tests, clean JSON fixtures are easier to read and
maintain. Test data with only relevant fields makes test intent clearer and reduces the
noise when tests fail and you need to compare expected versus actual output.
Content Translation Workflows
Translating product catalogs for international expansion presents unique challenges.
Product data exported from PIM systems often exceeds AI translation API limits. A
catalog with 5,000 products at 2KB each produces a 10MB JSON file, but many translation
APIs cap requests at 1-2MB. Cleaning removes empty descriptions and null values, then
splitting breaks the catalog into API-friendly chunks. After translation, merging
reconstructs the full multilingual catalog for PIM upload.
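A sketch of what that pipeline can look like in code. The cleanJson, splitByCount, and
translateChunk functions are placeholders for your own cleaning, splitting, and
translation-API code; they are not part of this tool:

```javascript
// Hypothetical pipeline: clean, split into API-sized chunks,
// translate each chunk, then merge the results back together.
async function translateCatalog(products, { cleanJson, splitByCount, translateChunk }) {
  const cleaned = cleanJson(products);            // drop empty descriptions and nulls
  const chunks = splitByCount(cleaned, 100);      // e.g. 100 products per request
  const translated = [];
  for (const chunk of chunks) {
    translated.push(await translateChunk(chunk)); // call your translation API
  }
  return translated.flat();                       // merged multilingual catalog
}
```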
Frequently Asked Questions
Is my JSON data secure?
Yes, all processing happens entirely in your browser using JavaScript. Your JSON data
never leaves your computer and is not sent to any server. You can verify this by opening
your browser's network inspector; there are no outbound requests after the initial page
load. This makes the tool safe for sensitive data including API keys, customer
information, and proprietary business data.
What happens to nested empty values?
The cleaning algorithm works recursively, walking through your entire JSON tree. It
removes empty values at any depth and then checks if parent objects became empty as a
result. If so, those parent objects are removed too. This cascading cleanup ensures you
don't end up with empty container objects after removing their contents. For example, if
an object contains only a nested object that itself only contains null values, both the
nested and parent objects are removed.
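A small before/after showing the cascading cleanup:

```javascript
// Before: "meta" contains only null values.
const before = { name: "Widget", meta: { notes: null, tags: null } };

// After cleaning: the nulls go, "meta" becomes empty, so "meta" goes too.
// Result: { name: "Widget" }
```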
Can it handle circular references?
No, JSON itself doesn't support circular references. If you try to process JavaScript
objects with circular references, JSON.stringify() will throw an error. This is a
limitation of the JSON format, not the tool. Circular references must be resolved before
converting to JSON, typically by removing the circular link or representing the
relationship differently.
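You can see the limitation with plain JSON.stringify:

```javascript
const node = { name: "root" };
node.self = node; // circular reference

try {
  JSON.stringify(node);
} catch (err) {
  console.log(err.name); // "TypeError": converting circular structure to JSON
}

// One way to resolve it before serializing: drop the circular link.
delete node.self;
console.log(JSON.stringify(node)); // {"name":"root"}
```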
How large a file can it process?
The tool is limited by your browser's available memory, which varies by device and
browser. I've successfully tested files up to 100MB on desktop browsers with 8GB+ of
system RAM. For mobile devices with limited memory, keep files under 25-30MB to avoid
crashes. If you're consistently working with larger files, consider using backend
processing or command-line tools designed for bulk data manipulation.
Does it preserve the order of object keys?
Modern JavaScript engines preserve insertion order for object keys, and this tool
maintains that order during processing. However, the JSON spec technically doesn't guarantee
key order, so you shouldn't rely on order for objects. If order matters for your use
case, use arrays instead of objects, or include an explicit ordering field in your data
structure.
What about Unicode and special characters?
The tool fully supports Unicode characters including emoji, accented letters, and
characters from non-Latin scripts. All processing uses JavaScript's native string
handling, which is Unicode-aware. You can safely process JSON containing text in any
language. The only potential issue is with very old browsers that have incomplete
Unicode support, but any modern browser handles this correctly.
Does it work offline?
After the initial page load, yes. The tool runs entirely in your browser with no server
dependencies. If you load the page while online, you can continue using it after
disconnecting from the internet. For true offline use, you could save the HTML, CSS, and
JavaScript files locally, though updates and bug fixes would require re-downloading.
Can I automate this process?
Not directly through this web interface, but you can replicate the functionality in your
own code. The tool is built with vanilla JavaScript, and the core algorithms are
straightforward. If you need automated JSON manipulation in a backend process, consider
using command-line tools like jq for cleaning and splitting, or write a simple script in
your preferred language. The advantage of this web tool is convenience for one-off tasks
without writing code.
How is this different from online JSON validators?
Validators check if your JSON syntax is correct but don't modify your data. This tool
transforms your JSON by cleaning empty values, splitting into parts, or merging multiple
files. Think of validators as proofreading, while this tool is editing. You might use a
validator to find syntax errors, then use this tool to remove unnecessary empty fields
from valid JSON.
Why not use command-line tools like jq?
Command-line tools like jq are powerful and efficient, especially for automated
workflows and complex transformations. But they require installation, learning a query
language, and comfort with the terminal. This web tool provides instant access without
installation, a visual interface, and no learning curve. Use jq for scripted automation
and complex queries. Use Utilitiz for quick one-off tasks when you need results
immediately without leaving your browser.
What if my JSON contains dates or special number values?
Dates in JSON are typically represented as ISO 8601 strings like "2026-01-28T10:30:00Z",
which the tool handles normally. However, JavaScript Date objects must be serialized to
strings before becoming valid JSON. Special number values like NaN, Infinity, and
-Infinity are not valid in JSON. If you try to process JavaScript objects containing
these values, JSON.stringify() will convert them to null, which may then be removed by
the cleaning function. Ensure your data uses JSON-compatible formats before processing.
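A quick check of how JSON.stringify handles these cases:

```javascript
// Date objects serialize to ISO 8601 strings.
console.log(JSON.stringify({ when: new Date("2026-01-28T10:30:00Z") }));
// -> {"when":"2026-01-28T10:30:00.000Z"}

// NaN and Infinity are not valid JSON numbers, so they become null.
console.log(JSON.stringify({ score: NaN, max: Infinity }));
// -> {"score":null,"max":null}
```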
Can I process multiple files at once?
For merging, yes. You can drag and drop multiple JSON files into the input area, and
they'll be concatenated for processing. For cleaning or splitting, process one file at a
time. If you need to clean or split many files, you'll need to process them
sequentially. For bulk operations on dozens of files, a command-line solution or custom
script would be more efficient than using a web interface repeatedly.
Common Pitfalls to Avoid
Accidentally Removing Meaningful Values
The most common mistake is cleaning JSON that contains semantically meaningful zero,
false, or empty string values. A status field with value 0 (inactive) is very different
from null (unknown) or missing (not applicable). I once spent 2 hours debugging why my API
authentication failed, only to realize that cleaning had removed a critical "access_level: 0"
field indicating guest access. The system treated the missing field as a request for the
default access level, which happened to be admin. Always review your data schema before
aggressive cleaning.
Breaking Object Relationships
When splitting arrays of objects that reference each other through IDs, you can
accidentally separate related records. For example, splitting an orders array with nested
line items might put an order in one part and its line items in another if your split
point falls between them. This breaks the data integrity. For relational data, split at
logical boundaries or keep related records together even if it means slightly uneven part
sizes.
Merge Conflicts Causing Silent Data Loss
When merging objects with duplicate keys, the last value wins. If you're not aware of
conflicts, you might lose important data without realizing it. For instance, merging three
config files where each defines a "database" object results in only the last file's
database config surviving. To prevent silent data loss, check for conflicts before merging
or structure your data to avoid key collisions by nesting under unique identifiers.
Character Encoding Issues
Copy-pasting JSON from some applications or terminals can introduce invisible characters
like the byte order mark (BOM) or non-breaking spaces. These break JSON parsing even
though the data looks correct visually. If you're getting parsing errors on seemingly
valid JSON, try pasting into a text editor that shows hidden characters, or use a hex
editor to spot non-ASCII bytes. Alternatively, manually retype the first few characters
rather than pasting.
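If you suspect a stray byte order mark, a one-line cleanup before parsing is usually
enough; this is a generic snippet, not something the tool does for you automatically:

```javascript
// A leading BOM (U+FEFF) makes JSON.parse throw a SyntaxError.
const pasted = "\uFEFF" + '{"name":"Widget"}';

// Strip the BOM before parsing.
const parsed = JSON.parse(pasted.replace(/^\uFEFF/, ""));
console.log(parsed.name); // "Widget"
```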
Browser Memory Exhaustion
Processing very large files can crash browser tabs if they exceed available memory.
Different browsers handle this differently; Chrome typically allows more memory than
Firefox on the same system. If you're regularly hitting crashes with large files, reduce
the file size, use a different browser, or switch to command-line tools. I learned to
check file sizes before processing after crashing a dozen tabs trying to clean a 150MB
export on a laptop with 8GB RAM.
Forgetting About Downstream Dependencies
When cleaning or transforming JSON that feeds into other systems, ensure those systems can
handle the modified structure. A field that was always present might have become optional
after cleaning. A downstream validation rule might expect exactly 100 items per batch, but
your splitting created 103 and 97 item parts. Always test transformed data with consuming
systems before running transformations on production data.
How It Works: Technical Details
Cleaning Algorithm
The cleaning algorithm uses recursive traversal to walk through your entire JSON
structure. Starting from the root, it examines each value and removes keys with null,
undefined, empty string, or empty array/object values. The algorithm operates with O(n)
time complexity, where n is the number of nodes in your JSON tree, making it efficient
even for large files.
What makes this approach effective is the recursive cleanup. After removing empty values
from a nested object, the algorithm checks if that object itself became empty. If so, it
removes the parent key as well. This cascading cleanup ensures you don't end up with empty
container objects littering your data structure. Memory usage is minimal because the
algorithm modifies the structure in place rather than creating deep copies.
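Here's a minimal sketch of this kind of recursive cleanup. It mirrors the behavior
described above but isn't the tool's exact source, and for brevity it builds a new
structure rather than modifying in place:

```javascript
// Recursively remove null, undefined, empty strings, and empty objects/arrays.
// Returns undefined when a value should be dropped entirely.
function clean(value) {
  if (value === null || value === undefined || value === "") return undefined;

  if (Array.isArray(value)) {
    const items = value.map(clean).filter((v) => v !== undefined);
    return items.length > 0 ? items : undefined; // cascading cleanup
  }

  if (typeof value === "object") {
    const result = {};
    for (const [key, v] of Object.entries(value)) {
      const cleaned = clean(v);
      if (cleaned !== undefined) result[key] = cleaned;
    }
    return Object.keys(result).length > 0 ? result : undefined; // cascading cleanup
  }

  return value; // numbers, booleans, and non-empty strings pass through
}

console.log(clean({ a: 1, b: null, c: { d: "", e: [] } }));
// -> { a: 1 }
```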
Splitting Strategies
The tool offers two splitting strategies because different use cases require different
approaches. Size-based splitting calculates the byte size of your JSON and divides it into
chunks that stay under your specified limit. This is ideal when you're dealing with API
limits, email attachment restrictions, or cloud storage multipart upload requirements. For
instance, AWS S3 multipart uploads require parts between 5MB and 5GB, making size-based
splitting essential.
Attribute-based splitting, on the other hand, divides arrays by item count or objects by
key count. This approach is better for batch processing scenarios where you want
consistent chunk sizes for parallel processing or database batch inserts. If you're
importing 10,000 products into a database that performs best with 100-record batches,
attribute-based splitting gives you exactly that.
The size calculation accounts for JSON formatting overhead, including quotes, braces, and
commas. This ensures the generated JSON parts, when serialized, actually stay under your
byte limit. The splitting algorithm maintains data integrity by never splitting in the
middle of an object or array element, so your data structure remains valid and usable.
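A sketch of the attribute-based strategy for arrays, splitting every N items; size-based
splitting works the same way except it accumulates serialized byte length (for example via
new TextEncoder().encode(JSON.stringify(part)).length) instead of counting items:

```javascript
// Split an array into parts of at most `itemsPerPart` items,
// never breaking an individual element apart.
function splitByCount(array, itemsPerPart = 100) {
  const parts = [];
  for (let i = 0; i < array.length; i += itemsPerPart) {
    parts.push(array.slice(i, i + itemsPerPart));
  }
  return parts;
}

const products = Array.from({ length: 250 }, (_, i) => ({ id: i }));
console.log(splitByCount(products, 100).map((part) => part.length)); // [100, 100, 50]
```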
Merging Logic
The merge function intelligently detects whether you're working with arrays or objects.
When merging arrays, it concatenates them in the order provided. When merging objects, it
performs a shallow merge where later values overwrite earlier ones for duplicate keys.
This matches the behavior most developers expect and is consistent with how JavaScript's
Object.assign() and the spread operator work.
Why shallow merge instead of deep merge? In my experience, deep merging often causes
unexpected behavior when you have nested objects with the same keys. Shallow merging makes
the behavior predictable: the last value wins. If you need deep merging for specific use
cases, you can merge the parts manually or use specialized tools designed for that
purpose.
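A sketch consistent with the merge behavior described above (array concatenation, shallow
object merge); it isn't the tool's exact source:

```javascript
// Shallow merge: arrays concatenate, objects let later keys overwrite earlier ones.
function merge(parts) {
  if (parts.every(Array.isArray)) {
    return parts.flat(); // array concatenation, order preserved
  }
  return Object.assign({}, ...parts); // shallow merge, last value wins
}

console.log(merge([[1, 2], [3]]));              // [1, 2, 3]
console.log(merge([{ a: 1 }, { a: 2, b: 3 }])); // { a: 2, b: 3 }
```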
Performance Characteristics
All operations run synchronously in the browser's main thread. For most JSON files under
50MB, processing takes under a second on modern hardware. The tool has been tested with
files up to 100MB, though browser memory limits vary by device. On mobile devices with
limited RAM, you may encounter performance issues with files larger than 25-30MB.
The operations are memory-efficient because they don't create unnecessary intermediate
copies. The cleaning algorithm modifies structures in place, and the splitting function
generates parts on demand rather than holding the entire split dataset in memory. This
design allows the tool to handle larger files than you might expect from a browser-based
application.
Security Model
Client-side processing isn't just a convenience; it's a security requirement. When you're
working with sensitive data like API keys, customer information, or proprietary product
catalogs, uploading that data to a third-party server introduces risk. Even with HTTPS,
you're trusting that server to handle your data responsibly and securely.
By processing everything in your browser, your JSON data never crosses the network. The
JavaScript code runs locally, performs the operations, and provides results for download.
Nothing is logged, stored, or transmitted. You can verify this by opening your browser's
network inspector; you'll see no requests to external servers after the initial page load.
This makes the tool safe for use with production data, test data containing real
information, and any other sensitive JSON you need to manipulate.