CDRs slow? Crashes when accessing CDRs? Add indexes to v_xml_cdr!

A couple of thoughts on this based on my experience...

1. Make sure you are running the latest version of FusionPBX. Mark removed the code that did a SELECT COUNT(*) on the v_xml_cdr table; that query is very slow when the table is large.
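If you are stuck on an older version and just want to see how big the table has grown, a common Postgres trick is to read the planner's row estimate instead of scanning the whole table. A sketch (reltuples is an estimate maintained by autovacuum/ANALYZE, not an exact count):
Code:
```shell
# Fast approximate row count for v_xml_cdr from planner statistics;
# avoids the full-table scan that an exact COUNT(*) requires.
psql --host=127.0.0.1 --username=fusionpbx -t -c \
  "SELECT reltuples::bigint AS approx_rows FROM pg_class WHERE relname = 'v_xml_cdr'"
```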

2. Clean out the JSON data in the v_xml_cdr table a few days after the call. There is no need to keep all of the FreeSWITCH channel variables forever.
Option 1: One big query. This eventually becomes inefficient once your data set grows too large.
Code:
psql --host=127.0.0.1 --username=fusionpbx -c "UPDATE v_xml_cdr SET json = NULL WHERE start_stamp < NOW() - INTERVAL '7 days'"
Option 2: Chunk the updates with a script.
Code:
#!/bin/bash
export PGPASSWORD="xxxxx"
START=1
DIV=100

#Get the total number of matching records from the database
TOTAL=$(psql --host=127.0.0.1 --username=fusionpbx -t -c "SELECT count(*) FROM v_xml_cdr WHERE json ? 'variables' AND start_stamp < NOW() - INTERVAL '3 days'")

#Divide the total by 100 (rounding up) since each pass clears at most 100 rows
END=$(( (TOTAL + DIV - 1) / DIV ))
echo "$END"

#Loop through the batches and set the json column to NULL
i=$START
while [[ $i -le $END ]]
do
    psql --host=127.0.0.1 --username=fusionpbx -t -c "WITH cte AS (SELECT uuid FROM v_xml_cdr WHERE json ? 'variables' AND start_stamp < NOW() - INTERVAL '3 days' LIMIT 100) UPDATE v_xml_cdr AS s SET json = NULL FROM cte WHERE s.uuid = cte.uuid"
    ((i = i + 1))
done
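One subtlety with batching like this: plain integer division floors the result, so TOTAL/DIV would skip the final partial batch (for example, 250 records at 100 per batch gives 2, leaving 50 rows untouched). Ceiling division avoids that. A quick illustration:
Code:
```shell
# Ceiling division in shell: round up so a final partial batch still
# gets its own loop iteration.
batches() {
    total=$1
    div=$2
    echo $(( (total + div - 1) / div ))
}

batches 250 100   # -> 3 (two full batches plus one partial)
batches 200 100   # -> 2
batches 1 100     # -> 1
```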
3. Utilize a CDR archive database. Export your v_xml_cdr table to another database server each night, then enable the archive feature in Default Settings. I believe these are the settings that you need:
Code:
['cdr']['archive_database']['boolean'] = 'true'
['cdr']['archive_database_driver']['text']
['cdr']['archive_database_host']['text']
['cdr']['archive_database_type']['text']
['cdr']['archive_database_port']['text']
['cdr']['archive_database_name']['text']
['cdr']['archive_database_username']['text']
['cdr']['archive_database_password']['text']

Here's the script for moving the records to the archive server...
Code:
#!/bin/sh
# - 10.9.9.9 is the archive IP
# - 10.8.8.8 is the main fusion database IP
# run this from the archive server

#Copy the data from the fusion db to local csv files
psql --host=10.8.8.8 --username=fusionpbx -c "\copy (SELECT * FROM v_domains) TO '/tmp/domains.csv' WITH CSV"
psql --host=10.8.8.8 --username=fusionpbx -c "\copy (SELECT * FROM v_fax_logs) TO '/tmp/fax_logs.csv' WITH CSV"
psql --host=10.8.8.8 --username=fusionpbx -c "\copy (SELECT * FROM v_xml_cdr) TO '/tmp/xml_cdr.csv' WITH CSV"
psql --host=10.8.8.8 --username=fusionpbx -c "\copy (SELECT * FROM v_conference_sessions) TO '/tmp/conference_sessions.csv' WITH CSV"
psql --host=10.8.8.8 --username=fusionpbx -c "\copy (SELECT * FROM v_conference_session_details) TO '/tmp/conference_session_details.csv' WITH CSV"

#Insert the data into the cdr server
# - create temp tables
# - copy the csv data into the temp tables
# - insert data from the temp tables into the real tables
# - drop the temp tables
# - remove the json data from the cdrs (too much space)


psql --host=10.9.9.9 --username=fusionpbx << EOF

CREATE TEMP TABLE tmp_domains AS SELECT * FROM v_domains WITH NO DATA;
CREATE TEMP TABLE tmp_fax_logs AS SELECT * FROM v_fax_logs WITH NO DATA;
CREATE TEMP TABLE tmp_xml_cdr AS SELECT * FROM v_xml_cdr WITH NO DATA;
CREATE TEMP TABLE tmp_conference_sessions AS SELECT * FROM v_conference_sessions WITH NO DATA;
CREATE TEMP TABLE tmp_conference_session_details AS SELECT * FROM v_conference_session_details WITH NO DATA;

COPY tmp_domains FROM '/tmp/domains.csv' DELIMITER ',' CSV;
COPY tmp_fax_logs FROM '/tmp/fax_logs.csv' DELIMITER ',' CSV;
COPY tmp_xml_cdr FROM '/tmp/xml_cdr.csv' DELIMITER ',' CSV;
COPY tmp_conference_sessions FROM '/tmp/conference_sessions.csv' DELIMITER ',' CSV;
COPY tmp_conference_session_details FROM '/tmp/conference_session_details.csv' DELIMITER ',' CSV;

INSERT INTO v_domains SELECT DISTINCT ON (domain_uuid) * FROM tmp_domains ON CONFLICT DO NOTHING;
INSERT INTO v_fax_logs SELECT DISTINCT ON (fax_log_uuid) * FROM tmp_fax_logs ON CONFLICT DO NOTHING;
INSERT INTO v_xml_cdr SELECT DISTINCT ON (uuid) * FROM tmp_xml_cdr ON CONFLICT DO NOTHING;
INSERT INTO v_conference_sessions SELECT DISTINCT ON (conference_session_uuid) * FROM tmp_conference_sessions ON CONFLICT DO NOTHING;
INSERT INTO v_conference_session_details SELECT DISTINCT ON (conference_session_detail_uuid) * FROM tmp_conference_session_details ON CONFLICT DO NOTHING;

DROP TABLE tmp_domains;
DROP TABLE tmp_fax_logs;
DROP TABLE tmp_xml_cdr;
DROP TABLE tmp_conference_sessions;
DROP TABLE tmp_conference_session_details;

UPDATE v_xml_cdr SET json = NULL;

EOF

#remove the csv files
rm /tmp/domains.csv
rm /tmp/fax_logs.csv
rm /tmp/xml_cdr.csv
rm /tmp/conference_sessions.csv
rm /tmp/conference_session_details.csv
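To make the nightly export actually run nightly, a crontab entry on the archive server could look like the following. The script path, log path, and 02:30 run time are my assumptions, not from the original setup:
Code:
```shell
# Run the archive copy every night at 02:30; append output to a log.
# /usr/local/bin/cdr_archive.sh is a hypothetical location for the script above.
30 2 * * * /usr/local/bin/cdr_archive.sh >> /var/log/cdr_archive.log 2>&1
```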
4. Be REALLY careful about adding indexes to a large data set when using Postgres BDR. It will take a very long time.
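Back to the thread title: on a plain (non-BDR) Postgres install, CREATE INDEX CONCURRENTLY builds an index without taking a write-blocking lock, which matters on a busy CDR table. A sketch; the column choices (start_stamp, domain_uuid) are my guesses at what the CDR page filters on, not confirmed from the FusionPBX schema:
Code:
```shell
# Build indexes without blocking writes to v_xml_cdr.
# Note: CONCURRENTLY cannot run inside a transaction block, so use
# one statement per psql call. Column choices are illustrative.
psql --host=127.0.0.1 --username=fusionpbx -c "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_v_xml_cdr_start_stamp ON v_xml_cdr (start_stamp)"
psql --host=127.0.0.1 --username=fusionpbx -c "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_v_xml_cdr_domain_uuid ON v_xml_cdr (domain_uuid)"
```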


Just for reference: we have anywhere from 1.5 to 3 million CDRs in the main fusion database. We keep JSON data for 3 days before clearing it. CDRs are kept in the main database for 30 days; customers needing older records access the archive server (using the "Archive" button in the GUI). We have about 11+ million records on the archive server. Everything is smoking fast!
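The 30-day retention on the main database can be enforced with a nightly delete. A sketch, assuming start_stamp drives retention; run it only after the archive copy has completed so nothing is lost instead of archived:
Code:
```shell
# Remove CDRs older than 30 days from the main database.
# Run this AFTER the nightly archive copy has succeeded.
psql --host=127.0.0.1 --username=fusionpbx -c "DELETE FROM v_xml_cdr WHERE start_stamp < NOW() - INTERVAL '30 days'"
```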

Hope this helps!
 
Archive server support is just in the master branch right now. I think I wrote the PR just after 4.4 was released. The changes Mark made around the time of this update were the game changer for Call Detail Records page speed. That is where he stripped out the SELECT COUNT and reworked the Next Page button at the bottom of the screen.

I should also note for the record that I don't use BDR on the archive server. I would recommend a Postgres standby server.