I feel like we could make our operating system more secure and make things easier for researchers by simply making a normal OS look like a virtual machine. Any program that needs to access resources in a non-virtualized way would have to ask for permission first. If granted, it could then see the relevant information or access the necessary APIs.
This way, malware authors would have to choose between making things easier for researchers or targeting far fewer people.
Either way, everyone except the malware creators wins.
Genode / SculptOS[0] go in this direction. Before starting any process, you craft the view of the hardware resources it will see. Applications come with resource request definitions, which you can satisfy by attaching real, virtual, or null resources.
It's a pretty neat system; runs Doom, so we know it's production ready; and the source is meticulously organized.
The docs try to be overly general, IMHO, which clouds the core ideas. If you're interested, I recommend just spinning up a VM and mucking about, along with the user guide.
Anti-cheat software vendors would lose as well. I prefer the software I run to know its place, but there are enough people who enjoy multiplayer games and who hate cheaters more than they hate what amounts to spyware.
I wonder if gaming cyber cafes with no input ports, whose machines only play against other PCs of the same franchise, would be a sustainable business venture: "no cheaters, no need to install spyware on your own device, warm coffee brought to your table just by clicking a desktop shortcut."
This is basically the value proposition for game consoles, for both players and game developers. For the player, secure gaming-specialized device that they don't have to set up and manage in the way a general-purpose computer requires. For the developer, a standardized trustable platform where they can avoid much of the anti-cheat complexity by letting the platform maintain security instead.
The other important incentive would be games that cannot be cheated. I've seen a few games on Steam whose reviews warn potential buyers that the games have been ruined because the devs didn't implement a successful anti-cheat system.
That doesn't seem like it would be possible if you want all the convenient hooks in VMs that let them integrate with, and be usable from, the host system.
The solution really does seem to be implementing those same hooks in non-VM environments, but gating their actual usage behind permissions. In a VM, the permissions could genuinely be granted or denied. In a non-VM, they would always be denied. But malware would never be able to tell why it was denied permission.
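A minimal sketch of that uniform-denial idea (all names here are hypothetical, not a real OS API): the same hook exists on both real and virtualized installs, but a refusal carries no detail, so a probing program cannot distinguish "I'm in a VM" from "the user said no."

```python
class PermissionDenied(Exception):
    """Single, uniform error: deliberately carries no reason for the denial."""

class Host:
    """Hypothetical host model: either a real VM or bare metal with user grants."""
    def __init__(self, is_vm, user_grants):
        self.is_vm = is_vm              # True when running inside a VM
        self.user_grants = user_grants  # set of resources the user approved

    def raw_access(self, resource):
        # In a VM the request is always refused; on bare metal it is refused
        # unless the user granted it. The exception is identical either way,
        # so the two cases are indistinguishable to the caller.
        if self.is_vm or resource not in self.user_grants:
            raise PermissionDenied()
        return f"raw handle to {resource}"

def probe(host):
    # What malware doing VM detection would see: only "granted" or "denied".
    try:
        host.raw_access("smbios")
        return "granted"
    except PermissionDenied:
        return "denied"

print(probe(Host(is_vm=True,  user_grants={"smbios"})))  # → denied
print(probe(Host(is_vm=False, user_grants=set())))       # → denied
print(probe(Host(is_vm=False, user_grants={"smbios"})))  # → granted
```

The point is that the first two calls return the same answer through the same code path, which is exactly the property that makes the VM-detection check useless.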
This is a huge, huge, huge amount of work. Even the most obvious things -- like "can you run a VM?" -- can require extensive support, in that case even from the hardware, when you want to do them within a VM.