JSON to SQL - Generate CREATE TABLE & INSERT Statements Online
Convert JSON data to SQL CREATE TABLE and INSERT statements for MySQL, PostgreSQL, SQLite, and SQL Server — 100% in your browser.
Frequently Asked Questions
What is a JSON to SQL converter?
A JSON to SQL converter is a tool that transforms JSON data into SQL statements. It generates CREATE TABLE statements (DDL) that define the database schema based on the structure and types found in your JSON, and INSERT INTO statements (DML) that populate the table with data. The tool automatically infers SQL column types from JSON values: strings become VARCHAR or TEXT, numbers become INT or FLOAT, booleans become BOOLEAN or BIT depending on the database dialect, and nested objects are stored as JSON columns.
How do I convert JSON to SQL statements?
Paste your JSON data (a single object or an array of objects) into the input field. Select your target SQL dialect (MySQL, PostgreSQL, SQLite, or SQL Server), enter a table name, and choose your output mode (CREATE TABLE + INSERT, CREATE only, or INSERT only). The tool instantly generates the corresponding SQL in the output panel. You can customize options like batch INSERT mode, primary key selection, and nested JSON handling before copying the result to your clipboard.
How does the type inference work?
The tool analyzes all values in each JSON field across all rows to determine the best SQL column type. Strings map to VARCHAR(255) or TEXT based on length, integers to INT or BIGINT based on magnitude, decimals to FLOAT, booleans to the dialect-specific boolean type (BOOLEAN for PostgreSQL, TINYINT(1) for MySQL, BIT for SQL Server), and null values make the column nullable. ISO 8601 date strings like '2026-03-20' are detected and mapped to DATE or TIMESTAMP types. Nested objects and arrays are stored as JSON/JSONB columns.
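The inference rules above can be sketched as a small per-value helper. This is an illustrative function, not the tool's internal code; the regexes, the 32-bit INT cutoff, and the 255-character VARCHAR threshold are assumptions matching the defaults described here:

```javascript
// Sketch of per-value SQL type inference (hypothetical helper, MySQL-style defaults).
function inferSqlType(value) {
  if (value === null || value === undefined) return null; // nulls only affect nullability
  if (typeof value === 'boolean') return 'BOOLEAN';
  if (typeof value === 'number') {
    if (!Number.isInteger(value)) return 'FLOAT';
    // Integers outside the signed 32-bit range are promoted to BIGINT
    return Math.abs(value) > 2147483647 ? 'BIGINT' : 'INT';
  }
  if (typeof value === 'object') return 'JSON'; // nested objects and arrays
  // ISO 8601 detection on strings: date-only vs. date-time
  if (/^\d{4}-\d{2}-\d{2}$/.test(value)) return 'DATE';
  if (/^\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}/.test(value)) return 'TIMESTAMP';
  return value.length > 255 ? 'TEXT' : 'VARCHAR(255)';
}
```

A real implementation would then merge the per-value results across all rows (e.g., widening INT to FLOAT when a column contains both).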
What are the differences between SQL dialects?
Each SQL dialect has unique syntax for identifiers, data types, and features. MySQL quotes identifiers with backticks (e.g., `column_name`), PostgreSQL and SQLite use double quotes ("column_name"), and SQL Server uses square brackets ([column_name]). Boolean types differ: PostgreSQL has a native BOOLEAN, MySQL uses TINYINT(1), and SQL Server uses BIT. Auto-increment syntax varies: MySQL uses AUTO_INCREMENT, PostgreSQL uses SERIAL, SQLite uses AUTOINCREMENT, and SQL Server uses IDENTITY(1,1). JSON storage also differs: PostgreSQL has JSONB, MySQL has a native JSON type, while SQLite and SQL Server fall back to TEXT and NVARCHAR(MAX) respectively.
Is my data secure when using this tool?
Yes, completely. This tool processes everything in your browser using client-side JavaScript. Your JSON data never leaves your computer — there are no server uploads, no API calls, and no data storage. This makes it safe to use with production database data, API responses containing sensitive information, and any confidential data you need to convert to SQL.
How does the tool handle nested JSON objects and arrays?
The tool offers two strategies for nested JSON. The default 'JSON column' strategy stores nested objects and arrays as JSON/JSONB columns in the database, preserving the original structure. The 'Flatten' strategy normalizes the data by creating separate related tables with foreign key relationships — nested objects become flattened columns with prefixed names, and arrays of objects create child tables. Choose JSON columns for quick storage and flexible querying, or flattening for fully normalized relational schema design.
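The column-prefixing half of the 'Flatten' strategy can be sketched as a recursive helper. This is a hypothetical illustration: the underscore separator is an assumption, and arrays are left in place here (in the real strategy, arrays of objects would spawn child tables instead):

```javascript
// Sketch of flattening nested objects into prefixed columns (hypothetical helper).
function flatten(obj, prefix = '') {
  const out = {};
  for (const [key, val] of Object.entries(obj)) {
    const name = prefix ? `${prefix}_${key}` : key;
    if (val !== null && typeof val === 'object' && !Array.isArray(val)) {
      Object.assign(out, flatten(val, name)); // recurse into nested objects
    } else {
      out[name] = val; // scalars kept as-is; arrays would go to child tables
    }
  }
  return out;
}
```

For example, `{ id: 1, address: { city: 'Oslo', geo: { lat: 59.9 } } }` flattens to columns `id`, `address_city`, and `address_geo_lat`.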
What is batch INSERT mode?
Batch INSERT mode combines multiple rows into a single INSERT statement using multi-value syntax: INSERT INTO table VALUES (...), (...), (...). This is significantly faster than individual INSERT statements when importing large datasets because it reduces the number of round trips to the database and lets the engine amortize parsing overhead across many rows. You can configure the batch size (default: 100 rows per statement) to balance statement size against performance. All four supported dialects accept this syntax, though SQL Server caps a single VALUES clause at 1,000 rows and SQLite requires version 3.7.11 or later.
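The chunking step can be sketched in a few lines. This is an illustrative helper with assumed names (`batchInsert`, and `valueLists` as pre-escaped SQL value tuples like `"(1, 'Alice')"`), not the tool's actual code:

```javascript
// Sketch of batched multi-value INSERTs (hypothetical helper; expects
// pre-escaped value tuples such as "(1, 'Alice')").
function batchInsert(table, columns, valueLists, batchSize = 100) {
  const statements = [];
  for (let i = 0; i < valueLists.length; i += batchSize) {
    const chunk = valueLists.slice(i, i + batchSize);
    statements.push(
      `INSERT INTO ${table} (${columns.join(', ')}) VALUES\n  ${chunk.join(',\n  ')};`
    );
  }
  return statements;
}
```

With 250 rows and the default batch size of 100, this emits three statements (100 + 100 + 50 rows).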
Code Examples
// JSON to SQL converter
function jsonToSql(jsonStr, options = {}) {
  const {
    tableName = 'my_table',
    dialect = 'mysql',
    includeCreate = true,
    includeInsert = true,
  } = options;
  const data = JSON.parse(jsonStr);
  const rows = Array.isArray(data) ? data : [data];
  if (rows.length === 0) return '';

  // Collect all column names across every row
  const columns = [...new Set(rows.flatMap(Object.keys))];

  // Infer types from all rows
  const types = {};
  for (const col of columns) {
    for (const row of rows) {
      const val = row[col];
      if (val === null || val === undefined) continue;
      if (typeof val === 'boolean') types[col] = 'BOOLEAN';
      else if (typeof val === 'number') {
        // Once a decimal is seen, keep FLOAT so mixed int/float columns don't truncate
        if (types[col] !== 'FLOAT') types[col] = Number.isInteger(val) ? 'INT' : 'FLOAT';
      } else if (typeof val === 'object') types[col] = 'JSON';
      else types[col] = 'VARCHAR(255)';
    }
    if (!types[col]) types[col] = 'VARCHAR(255)'; // all-null column: fall back to text
  }

  // Dialect-specific identifier quoting
  const quote = dialect === 'mysql' ? (n) => `\`${n}\``
    : dialect === 'sqlserver' ? (n) => `[${n}]`
    : (n) => `"${n}"`;

  const parts = [];
  if (includeCreate) {
    const colDefs = columns.map(c => `  ${quote(c)} ${types[c]}`);
    parts.push(`CREATE TABLE ${quote(tableName)} (\n${colDefs.join(',\n')}\n);`);
  }
  if (includeInsert) {
    for (const row of rows) {
      const vals = columns.map(c => {
        const v = row[c];
        if (v === null || v === undefined) return 'NULL';
        if (typeof v === 'boolean') return v ? 'TRUE' : 'FALSE';
        if (typeof v === 'number') return String(v);
        // Nested objects/arrays are serialized as JSON text; single quotes escaped
        if (typeof v === 'object') return `'${JSON.stringify(v).replace(/'/g, "''")}'`;
        return `'${String(v).replace(/'/g, "''")}'`;
      });
      parts.push(
        `INSERT INTO ${quote(tableName)} (${columns.map(quote).join(', ')}) VALUES (${vals.join(', ')});`
      );
    }
  }
  return parts.join('\n\n');
}
// Example usage
const json = '[{"id":1,"name":"Alice","active":true},{"id":2,"name":"Bob","active":false}]';
console.log(jsonToSql(json, { tableName: 'users', dialect: 'postgresql' }));