Deftkit

JSON ⇄ CSV Converter

Convert between JSON and CSV in both directions. Handles nested objects with dot-notation flattening and RFC 4180 quoting. Runs entirely in your browser.


Two formats that share 80% of their use cases

JSON and CSV are the two most common data-exchange formats on the web, and they cover surprisingly similar ground. CSV is old, simple, and understood by every spreadsheet app on earth. JSON is newer, richer, and native to every modern API. The friction is only at the border: converting between them is a recurring chore because legacy tools prefer CSV (Excel, BI systems, accounting software) while new tools prefer JSON (APIs, config files, databases).

This tool converts both directions, runs entirely in your browser, handles nested objects with flattening, and produces RFC 4180-compliant CSV output that any spreadsheet will parse correctly.

JSON → CSV: the flattening problem

CSV is strictly flat: a grid of rows and columns. JSON can be nested arbitrarily deep. So converting JSON to CSV always involves a choice about how to flatten nested structures. This tool offers two strategies.

Dot notation (default)

Nested keys are joined with dots. An object like {"user": {"name": "Ada"}} flattens to a single column user.name. Array elements are indexed: tags[0] and tags[1] become the columns tags.0 and tags.1. Use this when every row has the same nested shape and you want the CSV reader (Excel, pandas, etc.) to see every leaf value as a distinct column.

Input (JSON):
[
  {
    "id": 1,
    "name": "Ada",
    "address": { "city": "London", "zip": "NW1" },
    "skills": ["math", "poetry"]
  }
]

Output (CSV, dot style):
id,name,address.city,address.zip,skills.0,skills.1
1,Ada,London,NW1,math,poetry
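In code, dot flattening amounts to a short recursive walk over the value. The sketch below is illustrative, not the tool's actual source; the function name is hypothetical:

```javascript
// Flatten a nested JSON value into a single-level object whose keys
// are dot-joined paths. Array indices become path segments too.
function flatten(value, prefix = "", out = {}) {
  if (value !== null && typeof value === "object") {
    const entries = Array.isArray(value)
      ? value.map((v, i) => [String(i), v])
      : Object.entries(value);
    for (const [key, v] of entries) {
      flatten(v, prefix ? `${prefix}.${key}` : key, out);
    }
  } else {
    out[prefix] = value; // leaf: string, number, boolean, or null
  }
  return out;
}
```

Running it on the sample row above yields keys like address.city and skills.0; the union of keys across all rows then becomes the CSV header.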

JSON string (alternative)

Nested values get stringified as a JSON blob in a single cell. Use this when arrays are variable-length (row 1 has 3 tags, row 2 has 5), or when you want to preserve the nested structure for round-tripping back to JSON later.

Output (CSV, stringify style):
id,name,address,skills
1,Ada,"{""city"":""London"",""zip"":""NW1""}","[""math"",""poetry""]"

Notice the doubled quotes — that's the RFC 4180 way to escape a literal quote inside a quoted field. Every spreadsheet app understands this.
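The quoting rule fits in a few lines. This is a hypothetical helper for illustration, not the tool's source:

```javascript
// RFC 4180 field encoding: quote the field only if it contains the
// delimiter, a literal quote, or a newline, and double any quotes inside.
function encodeField(value, delimiter = ",") {
  const s = String(value);
  if (s.includes(delimiter) || s.includes('"') || s.includes("\n") || s.includes("\r")) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s;
}
```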

CSV → JSON: RFC 4180 parsing

CSV is a deceptively simple format. The hard parts:

  • Fields containing the delimiter: "hello, world" is ONE field, not two. The quotes group the comma as data
  • Fields containing quotes: a literal quote inside a quoted field is written as two quotes: "She said ""hi""" decodes to She said "hi"
  • Fields containing newlines: a quoted field can span multiple lines. Parsing this correctly requires tracking quote state character-by-character
  • Mixed line endings: Windows CSVs use CRLF, Unix ones use LF. A correct parser handles both
  • Trailing empty lines: usually should be dropped, but sometimes represent intentional empty rows

This tool implements a pure-JS character-by-character parser that follows RFC 4180 for all these cases — no shortcuts, no regex hacks. The parser is ~60 lines of code and handles every edge case you'll hit in practice.
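A parser in that spirit can be sketched as a single pass with a quote-state flag. Names and structure here are assumptions for illustration, not the tool's actual code:

```javascript
// Character-by-character CSV parse per RFC 4180. Returns an array of
// rows, each an array of string fields. Handles quoted delimiters,
// doubled quotes, embedded newlines, and mixed CRLF/LF line endings.
function parseCSV(text, delimiter = ",") {
  const rows = [];
  let row = [];
  let field = "";
  let inQuotes = false;
  for (let i = 0; i < text.length; i++) {
    const c = text[i];
    if (inQuotes) {
      if (c === '"') {
        if (text[i + 1] === '"') { field += '"'; i++; } // "" is a literal quote
        else inQuotes = false;                           // closing quote
      } else {
        field += c; // delimiters and newlines are data inside quotes
      }
    } else if (c === '"') {
      inQuotes = true;
    } else if (c === delimiter) {
      row.push(field); field = "";
    } else if (c === "\n" || c === "\r") {
      if (c === "\r" && text[i + 1] === "\n") i++; // swallow the CRLF pair
      row.push(field); field = "";
      rows.push(row); row = [];
    } else {
      field += c;
    }
  }
  // A trailing newline leaves nothing pending, so no phantom empty row.
  if (field !== "" || row.length > 0) { row.push(field); rows.push(row); }
  return rows;
}
```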

Type inference

CSV is untyped — every field is a string. When converting to JSON, this tool can optionally infer types: numbers, booleans, and nulls get parsed into their proper JSON types so your downstream code doesn't have to.

With type inference ON:
{ "id": 1, "active": true, "notes": null }

With type inference OFF (strings only):
{ "id": "1", "active": "true", "notes": "null" }

Inference rules:

  • true / false (lowercase) → boolean
  • null (lowercase) → null
  • Any field matching the JSON number grammar (-?\d+(\.\d+)?([eE][+-]?\d+)?) → number, but only if the round-trip is exact (no leading zeros that would be lost, no precision loss)
  • Everything else → string

Turn inference OFF when you have fields that look like numbers but should stay strings. The canonical examples: phone numbers, US ZIP codes (00214 → 214 loses the leading zero), credit card numbers, scientific IDs. If your data contains any of these, uncheck "Parse numbers/booleans".
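The rules above can be sketched as a single function (a hypothetical helper, not the tool's source). The key detail is the round-trip check: a field is promoted to a number only if stringifying it back reproduces the input exactly, which is what keeps "00214" a string:

```javascript
// Infer a JSON value from a raw CSV field per the rules above.
function inferValue(s) {
  if (s === "true") return true;
  if (s === "false") return false;
  if (s === "null") return null;
  if (/^-?\d+(\.\d+)?([eE][+-]?\d+)?$/.test(s)) {
    const n = Number(s);
    if (Number.isFinite(n) && String(n) === s) return n; // exact round-trip only
  }
  return s; // everything else stays a string
}
```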

How to use this tool

  1. Pick a direction: JSON → CSV or CSV → JSON
  2. Paste your input on the left. The output updates live
  3. For JSON → CSV: choose the flattening style (Dot or JSON string) and the delimiter (comma, tab, or semicolon)
  4. For CSV → JSON: choose the delimiter (or click Auto-detect to have the tool pick based on what's most common in your input), toggle type inference, and pick an indent for the output
  5. Click Swap sides to flip direction and move the current output to the input — useful for round-trip testing
  6. Click Flat sample or Nested sample to load example data in the current direction
  7. Click Copy to put the output on your clipboard. Stats at the bottom show the row and column count

Delimiter choice: comma vs semicolon vs tab

The "C" in CSV stands for comma, but real-world CSV files use several delimiters:

  • Comma (,) — the default. Used by Excel exports outside Europe, pandas, most APIs
  • Semicolon (;) — common in European locales where comma is the decimal separator. A German or French Excel will default to semicolon CSVs to avoid confusing "1,5" (which means 1.5) with a field separator
  • Tab (\t) — TSV, "tab-separated values". Used when fields commonly contain commas (addresses, prose, names with titles). Preferred for exports from databases and spreadsheet copy-paste

The auto-detect button counts the three candidates in the first few lines of your input and picks whichever appears most often. It works for 95% of inputs. For the remaining 5% (files where the delimiter is inside quoted fields more often than outside), pick manually.
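One plausible implementation of that heuristic (illustrative only, not the tool's actual code): count each candidate in a small sample of lines and take the most frequent, defaulting to comma on a tie.

```javascript
// Pick the delimiter that appears most often in the first few lines.
function detectDelimiter(text) {
  const sample = text.split(/\r?\n/).slice(0, 5).join("\n");
  const candidates = [",", ";", "\t"];
  let best = ",";
  let bestCount = -1;
  for (const d of candidates) {
    const count = sample.split(d).length - 1;
    if (count > bestCount) { best = d; bestCount = count; }
  }
  return best;
}
```

Note that this counts delimiters inside quoted fields too, which is exactly why the heuristic fails on the rare files described above.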

Common gotchas

  • Excel number formatting: if you open the output CSV in Excel, it may auto-format long numbers in scientific notation (1.23E+15) and strip leading zeros. This is Excel's doing, not the CSV's. Use Data → Get Data → From Text/CSV with explicit type hints to avoid it, or import into Google Sheets which is less aggressive
  • UTF-8 BOM: Excel on Windows used to need a byte-order mark at the start of UTF-8 CSV files to recognize non-ASCII characters. Modern Excel handles UTF-8 without a BOM, but some older versions don't. If accented characters look like garbage in Excel, you may need to prepend \uFEFF
  • Variable-length arrays: if your JSON has rows with different array lengths (tags: ["a"] vs tags: ["a", "b", "c"]), dot flattening creates columns like tags.0, tags.1, tags.2 where some rows have empty values. The stringify strategy handles this more gracefully
  • Missing keys: if row 1 has {"a": 1, "b": 2} and row 2 has {"a": 1, "c": 3}, the union of keys is a, b, c and missing values are rendered as empty strings
  • Key order: the CSV column order follows the first-seen order of keys across all rows. This is usually what you want but can surprise you if row 1 has a different schema from row 2

Privacy

JSON parsing uses the browser's native JSON.parse. CSV parsing and flattening are pure JavaScript running in your browser. Nothing is uploaded, nothing is logged, nothing leaves your device. Safe for customer exports, financial CSVs, and unreleased data.

Frequently asked questions

Which is better for large files, JSON or CSV?

CSV is almost always smaller and faster to parse than JSON for the same tabular data — no quotes around keys, no punctuation between rows, no nesting overhead. For 100,000+ row datasets, CSV is the right format on the wire. JSON wins when the data has nested structure that would require lossy flattening to fit into CSV.

Can I convert CSV to nested JSON?

Not directly — CSV is flat by definition. If your CSV headers use dot notation (user.name, user.age), you could post-process the JSON output to un-flatten, but this tool returns flat objects keyed by the raw header names. For a true nested output, you'd need a custom transform step after the convert.
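As a sketch of that post-processing step (a hypothetical helper, not part of this tool), dot-notation keys can be rebuilt into nested objects; numeric path segments are treated as plain object keys here, and promoting them back to arrays is left out for brevity:

```javascript
// Rebuild nested objects from dot-joined keys, e.g.
// { "user.name": "Ada" } → { user: { name: "Ada" } }.
function unflatten(flat) {
  const out = {};
  for (const [path, value] of Object.entries(flat)) {
    const parts = path.split(".");
    let node = out;
    for (let i = 0; i < parts.length - 1; i++) {
      node = node[parts[i]] ??= {}; // descend, creating objects as needed
    }
    node[parts[parts.length - 1]] = value;
  }
  return out;
}
```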

Does this handle very large files?

Up to ~10 MB of input is comfortable in a browser tab. Above that, the DOM textarea becomes sluggish (rendering the output string as text is the bottleneck, not the conversion logic). For larger files, use command-line tools like jq, csvkit, or pandas.

Why does my CSV have doubled quotes everywhere?

RFC 4180 requires that any literal " inside a quoted field be written as "". This looks noisy but it's the only way a parser can distinguish "the field ends here" from "a literal quote in the data". The tool's CSV → JSON direction decodes them back to single quotes automatically.

What happens if the first row is not a header?

The tool always treats row 1 as headers when converting CSV → JSON. If your CSV has no header row, add one before pasting (e.g., col1,col2,col3), or use a different approach: parse the CSV, then map rows to a tuple/array structure instead of objects.
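As a sketch of that alternative (a hypothetical helper, assuming you already have the parsed rows as arrays of strings): zip each row with generated column names instead of a header row.

```javascript
// Turn header-less rows into objects keyed col1, col2, ...
function rowsToObjects(rows) {
  return rows.map(row =>
    Object.fromEntries(row.map((field, i) => [`col${i + 1}`, field]))
  );
}
```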

Can I convert to Excel directly?

Not to .xlsx — that's a binary format. But the CSV output opens directly in Excel, Google Sheets, Numbers, and LibreOffice Calc. Copy the CSV and paste into a blank sheet, or save as data.csv and double-click.

Is my data sent anywhere?

No — the entire conversion runs in your browser. No network calls, no analytics on the data you paste. Safe for internal exports, customer records, and sensitive business data.