28 October 2019
By Dr Irving Hofman
RYUK is sophisticated cryptoware with ransom demands in the order of half a million US dollars. The healthcare sector in Australia has become a specific target. Recently, the Exigence cybersecurity incident response team was called upon to recover the IT systems of an unfortunate victim.
The perpetrators behind RYUK are active adversaries who combine advanced attack techniques with interactive, hands-on hacking to increase their rate of success. To date they’ve netted over 4 million US dollars.
This is a first-hand account of cryptoware in action. The story begins when our helpdesk received a call after-hours from the IT manager of an ASX listed organisation in the healthcare sector.
Victim: “NONE OF MY SERVERS ARE BOOTING. THEY ALL HAVE AN ERROR MESSAGE: ‘AN OPERATING SYSTEM WAS NOT FOUND’.”
They’d just suffered a power outage, so we didn’t suspect anything malicious. As we ruled out hardware issues and the like, further investigation revealed that all the physical and virtual servers had been cryptolocked. 20 servers all up. 100 workstations were also cryptolocked. Basically, everything was offline and the entire company could not operate.
Exigence: “OK, let’s restore the servers from backups. You have backups, don’t you?”
Victim: “Yes, backups are stored on a NAS device.”
We log onto the NAS. Oh no! There are no backups on the NAS. They’ve all been deleted. Not encrypted but deleted!
With RYUK, the cyber-criminals purposely remove your safety net, disabling all your recovery mechanisms. This was not some automated attack. Someone manually hacked the servers, gained full domain administrative privileges, audited the entire IT Infrastructure to discover what backup systems were in place and purposely deleted them.
Exigence: “You have off-site backups, don’t you?”
Victim: “Yes, of course. Our backups are replicated to the cloud with controls on retention.”
We log onto the cloud platform. Oh no! The backups in the cloud are 6 months old. Replication hasn’t been working. That’s an issue for another day. However, things are looking very bleak! It looks like 6 months’ worth of data is gone forever.
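A replication failure like this can go unnoticed for months unless backup age is actively monitored. As a minimal sketch (the server names, threshold and timestamps here are ours, purely for illustration), a simple age check run daily would have flagged the stale cloud copies long before they were needed:

```python
from datetime import datetime, timedelta

# Maximum tolerable age before a replicated backup is considered stale.
# Two days is an illustrative threshold, not a recommendation.
MAX_BACKUP_AGE = timedelta(days=2)

def stale_backups(backup_timestamps, now, max_age=MAX_BACKUP_AGE):
    """Return the backups whose last successful replication is older than max_age.

    backup_timestamps: dict mapping a backup name to its last replication time.
    """
    return {name: ts for name, ts in backup_timestamps.items()
            if now - ts > max_age}

# Hypothetical example: one server replicated yesterday, one six months ago.
now = datetime(2019, 10, 28)
backups = {
    "fileserver": datetime(2019, 10, 27),
    "dbserver": datetime(2019, 4, 28),   # replication silently broken
}
print(stale_backups(backups, now))  # flags dbserver only
```

Wiring a check like this into an alerting system turns a silent six-month gap into a same-week fix.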
After some further investigation, we discovered their SAN also had a copy of the backups. The cyber-criminals missed it. Phew! More luck than brains. We all thank our lucky stars that we’ve been thrown a lifeline.
Before we commence restoring backups, we disconnect every workstation from the network to prevent reinfections. We lock down the firewall so servers cannot access the Internet. Then we start the slow and arduous process of restoring server backups one-by-one. We decide to restore from a backup 3 days before the infection took place.
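Choosing the restore point comes down to taking the newest snapshot taken safely before the compromise, so that no dormant malware comes back with the data. A small illustrative helper (the function and the three-day margin are our sketch, not any backup product's API):

```python
from datetime import datetime, timedelta

def pick_restore_point(snapshots, infection_date, safety_margin=timedelta(days=3)):
    """Return the newest snapshot taken at least `safety_margin` before infection.

    snapshots: list of snapshot datetimes. Returns None if nothing qualifies.
    """
    cutoff = infection_date - safety_margin
    candidates = [s for s in snapshots if s <= cutoff]
    return max(candidates, default=None)

# Hypothetical timeline: daily-ish snapshots, infection discovered on 25 Oct.
infection = datetime(2019, 10, 25)
snaps = [datetime(2019, 10, d) for d in (18, 20, 22, 24)]
print(pick_restore_point(snaps, infection))  # 2019-10-22 00:00:00
```

The margin trades data loss against safety: the wider it is, the more recent work is sacrificed, but the lower the odds of restoring an already-compromised image.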
We’ve been on the job for 4 days now, and we’ve restored most of the servers. Unfortunately, the backups on one of the servers weren’t operational, so they’ve lost 6 months of data on that one. They don’t have a DR system, so it’s all a slow, manual and arduous task. Nobody can work yet because they haven’t got any clean workstations to work on. That’s a separate task for next week. The productivity losses from 100 staff who can’t work are immense.
We’ve still got more work to do on the servers. We need to forensically analyse the restored servers to ensure there’s no malware still lurking in the background, ready to re-infect them. And we still need to wipe and rebuild the entire fleet of workstations.
What lessons can we learn from all this?
Disclosure: This organisation does not have an Exigence Managed Services and/or Managed Security Services contract.