Migrate MySQL to PostgreSQL
As part of Umami v3, we are decommissioning MySQL as a supported database option and standardizing on PostgreSQL. This change ensures a more consistent and scalable data layer across all environments.
Data migration
This document outlines the migration process from MySQL to PostgreSQL, including:
- Preparing a MySQL dump file or CSV exports.
- Importing the data into a new PostgreSQL database.
- Verifying data integrity and application compatibility.
The goal is to make the transition seamless while preserving all data and keeping downtime to a minimum. Open-source tools such as
pgloader and pg_chameleon are also worth exploring. The process outlined below, however, is a pure data migration into a new v2 PostgreSQL database,
which keeps the method as simple and tool-agnostic as possible.
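If you do explore the pgloader route instead, a basic run takes the source and target connection strings directly. The sketch below is only for orientation; the credentials, hosts, and database names are placeholders, and you should consult the pgloader documentation for options such as loading data only into an existing schema.

```shell
# Placeholder credentials and hosts; pgloader copies schema and data by default,
# so check its documentation if you only want a data-only load.
pgloader mysql://user:password@mysql-host:3306/umami \
         postgresql://user:password@postgres-host:5432/umami
```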
Requirements
- Your MySQL database schema must be in sync with the latest v2 release (v2.19.0).
- Credentials for both databases:
  - MySQL: mysql://user:password@host:port/dbname
  - PostgreSQL: postgresql://user:password@host:port/dbname

Install Umami with a PostgreSQL database
- Install Umami (v2.19.0) on a PostgreSQL server. Documentation for the installation process can be found here.
- Truncate the tables below. These tables will be populated from the MySQL import.

```sql
truncate table "_prisma_migrations";
truncate table "user";
```
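For example, assuming the PostgreSQL connection string from the requirements above is stored in an environment variable such as DATABASE_URL (an assumption for this sketch), the truncation can be run non-interactively:

```shell
# DATABASE_URL is assumed to hold the postgresql://user:password@host:port/dbname string.
psql "$DATABASE_URL" -c 'truncate table "_prisma_migrations", "user";'
```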
Running migration
MySQL dump file
For smaller databases (fewer than 1 million records), you can use a MySQL dump file. The official mysqldump documentation can be found here.
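If you are not sure which approach applies, a quick count of the largest table (typically website_event) can help you decide; for example:

```shell
# Rough size check; adjust the credentials and database name to your setup.
mysql -u username -p mydb -e "SELECT COUNT(*) FROM website_event;"
```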
- Generate a MySQL dump file.
```shell
mysqldump --no-create-info --default-character-set=utf8mb4 --quick --skip-add-locks \
  -u username -p mydb > mydbdump.sql
```

- Replace backticks with double quotes to make the dump PostgreSQL compatible.

```shell
sed -i 's/`/"/g' mydbdump.sql
```

- Run the dump file using psql.

```shell
psql -U username -d mydb < mydbdump.sql
```

Docker version:

```shell
docker exec -i my-container psql -U username -d mydb < mydbdump.sql
```

Now you can upgrade to Umami (v3.0).
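Before upgrading, it is worth spot-checking that row counts match between the two databases. A minimal sketch, assuming the PostgreSQL connection string is available as DATABASE_URL and using a few core tables (adjust the table list to your schema):

```shell
# Compare row counts between MySQL and PostgreSQL for a few tables.
# mysql will prompt for the password on each iteration.
for t in website session website_event; do
  echo "== $t =="
  mysql -u username -p mydb --skip-column-names -e "SELECT COUNT(*) FROM $t;"
  psql "$DATABASE_URL" --tuples-only -c "SELECT count(*) FROM \"$t\";"
done
```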
Generating CSVs
For larger databases, the dump file may be too large and the individual insert statements may become inefficient.
- Generate MySQL CSVs for all the necessary tables that contain data.
Inside the MySQL shell:
```sql
SELECT *
INTO OUTFILE '/tmp/website_event.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM website_event;
```

Notes
- The file path (/tmp/website_event.csv in the example above) is on the MySQL server's filesystem, not necessarily your host.
- Make sure MySQL has permission to write there (secure_file_priv may restrict this).
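You can check that restriction before exporting. An empty value means any location is allowed, a directory path restricts writes to that directory, and NULL disables OUTFILE exports entirely:

```shell
# Show where MySQL is allowed to write OUTFILE exports.
mysql -u username -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"
```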
- Move the file from the MySQL server to a location accessible by psql, using scp or cp.
- Use the psql \COPY command to import the CSVs into the PostgreSQL database. Repeat the command for each table; a loop sketch follows the notes below.
```shell
psql -U postgres -d mydb -c "\COPY website_event FROM 'website_event.csv' CSV NULL '\N'"
```

Notes
- Include the schema name in the \COPY command if you are not using the default schema.
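Since \COPY has to be run once per table, a small loop can save some typing. A sketch, assuming each CSV sits in the current directory and is named after its table:

```shell
# Import every CSV into the PostgreSQL table of the same name.
for f in *.csv; do
  table="${f%.csv}"
  psql -U postgres -d mydb -c "\COPY \"$table\" FROM '$f' CSV NULL '\N'"
done
```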