The purpose of data compression in the PI server is to save disk space. I heard a story from the CEO of OSIsoft that the first PI server used a 10-megabyte hard drive, and in the 1980s, that hard drive cost $250,000.
As hard drives became easier to make and the cost per megabyte plummeted, people came to see data compression as a legacy feature that isn't worth thinking about. In fact, I've had people assume that throwing money at the problem makes it go away. The problem doesn't go away, and here's why:
The value of PI comes from putting expert eyeballs on trends. If trends take longer to load because the archive is filled with uncompressed, redundant data, then those eyeballs are going to view fewer trends. The cost of curiosity increases ever so slightly, and over time, you lose.
From an IT perspective, liberal compression settings mean more hard disk consumption. I've seen a GMP plant use 300 megabytes per day; that's over 100 gigabytes a year. "Hold on," you say, "a 100 GB SSD will cost you $150... that's less than the cost of the Change Record!" True... but keeping years of archive data online means you're going to keep upgrading the hardware.
Backing up 10X the data costs roughly 10X the time. It's just unwieldy, especially when the fix can be as simple as setting:

compdev > 0.
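To see why a nonzero compdev matters, here is a minimal Python sketch of the swinging-door idea behind PI's compression test. This is an illustrative toy, not the PI server's actual implementation; `compdev` here is simply the deviation tolerance, and the function and point format are made up for the example:

```python
def swinging_door(points, compdev):
    """Toy swinging-door compression.

    points: list of (time, value) tuples, time strictly increasing.
    compdev: deviation tolerance (same units as value).
    Returns the subset of points that would be archived.
    """
    if len(points) <= 2:
        return list(points)

    kept = [points[0]]
    anchor_t, anchor_v = points[0]
    # Slopes of the two "doors" pivoting at the last archived point.
    slope_max = float("inf")
    slope_min = float("-inf")
    prev = points[0]

    for t, v in points[1:]:
        dt = t - anchor_t
        # Narrow the doors so every skipped point stays within +/- compdev.
        slope_max = min(slope_max, (v + compdev - anchor_v) / dt)
        slope_min = max(slope_min, (v - compdev - anchor_v) / dt)
        if slope_min > slope_max:
            # Doors have closed: archive the previous point and restart.
            kept.append(prev)
            anchor_t, anchor_v = prev
            dt = t - anchor_t
            slope_max = (v + compdev - anchor_v) / dt
            slope_min = (v - compdev - anchor_v) / dt
        prev = (t, v)

    kept.append(points[-1])
    return kept


# A perfectly linear ramp of 10 points compresses to just its endpoints:
ramp = [(float(i), float(i)) for i in range(10)]
print(len(swinging_door(ramp, 0.5)))  # 2 points kept instead of 10
```

The point of the sketch: with compdev = 0, every noisy sample "closes the doors" and gets archived; with even a small tolerance, straight-line stretches collapse to their endpoints, which is where the disk savings come from.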
Think about your data compression settings. Do some research on what they ought to be. In fact, get the Zymergi whitepaper on OSI PI compdev and excdev emailed to you for free.
PI data compression is a set-it-and-forget-it activity. Do it right the first time and you basically never have to think about it again.