In search of the perfect instruction

Michal Krejdl 16 Feb 2016

Knowing the language of common microprocessors is essential for the work of virus analysts across the AV industry.

Each program you run - clean or malicious, it doesn't matter - is actually a set of commands (called instructions) specific to a particular processor. These instructions can be very simple, e.g. the addition of two numbers, but we can also find very complex ones, such as cryptographic functions.
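
For a concrete flavor (the byte encodings below come from Intel's instruction set reference; the snippet itself is just an illustration, not anything from our codebase): a trivial addition and an AES round are both "just instructions" to the processor.

```c
/* Two x86 instructions as raw machine-code bytes.
 * A two-byte addition sits next to a five-byte AES round -
 * the processor treats both as ordinary instructions. */
const unsigned char add_eax_ebx[] = { 0x01, 0xD8 };             /* add eax, ebx      */
const unsigned char aesenc_xmm[]  = { 0x66, 0x0F, 0x38, 0xDC,
                                      0xC1 };                   /* aesenc xmm0, xmm1 */
```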

As processor architecture evolves over time, it becomes more and more complicated, and understanding or decoding its language gets more difficult. It (hypothetically) would not have to be like this, but there's a hell called backward compatibility.

Sooner or later, popular products such as (the majority of) desktop computers with x86 processors need some innovative development to meet future requirements. Sometimes the amount of innovation is so vast that it could easily form a completely new product. That's the decision point - to be, or not to be - are you going to start a new product line and throw away the old platform, or will you stick with the old solution and keep backward compatibility by hook or by crook?

Intel actually tried both ways. They admitted that there was a need to make fundamental changes to the architecture and not limit themselves with the 20-year-old shackles of the 8086 processor.

So they started with the Itanium series - a completely different processor without the old limits. But the majority of common applications were written/compiled for the traditional x86 architecture, and Itanium never made it into the "mainstream".

To be honest, Itanium's primary focus is the sphere of enterprise servers and high-end solutions, but it was a chance to make a big change with an impact on traditional desktop systems as well. However, that would have meant gradually converting existing users to the new platform, motivating developers to write applications (browsers, media players, office suites...) for it, and so on. This never happened, and Itanium remains an enterprise solution. The x86 ecosystem was, and still is, so strong that every innovation had to be pasted onto the old architecture. If you can't start a greenfield project, you always end up choosing the lesser of two evils.

What are the drawbacks to Avast virus analysts?

The language of x86 processors is called x86 assembly. If we want to understand it, we must decode it with our x86 disassembler. This is a crucial part of static analysis, emulation, and dynamic translation - weapons used by, I would guess, all antivirus engines to fight malware.
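
To give a flavor of what that decoding means in practice, here is a toy sketch in C - not our engine, just an illustration covering a handful of single-byte opcodes, whereas a real decoder must also handle prefixes, ModRM/SIB bytes, displacements, immediates, and several multi-byte opcode maps:

```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative only: map a few single-byte x86 opcodes to mnemonics. */
static const char *decode_one(unsigned char opcode)
{
    switch (opcode) {
    case 0x90: return "nop";
    case 0xC3: return "ret";
    case 0xCC: return "int3";
    case 0xF4: return "hlt";
    default:   return "<needs a full decoder>";
    }
}

int main(void)
{
    const unsigned char code[] = { 0x90, 0xCC, 0xC3 };
    for (size_t i = 0; i < sizeof code; i++)
        printf("%02X  %s\n", code[i], decode_one(code[i]));
    return 0;
}
```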

In our case, having such a disassembler means having over 16,000 lines of C code and data, including padding and formatting. It could be much shorter if there were no logical exceptions to the decoding scheme, no prefixes reused for different purposes and given completely different meanings, and so on. Under such circumstances, writing a reliable, fast, and small disassembler is really difficult, and with each "paste-onto-the-old-architecture" innovation it gets closer to impossible. It's going to be either big, slow, or not that reliable by design, because the x86/x64 architecture is so rich and inhomogeneous.
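
To make the prefix reuse concrete (the encodings below come from Intel's manuals; the snippet itself is only a toy): the byte 0xF3 is the REP prefix for string instructions, yet the very same byte turns NOP (0x90) into PAUSE and acts as a mandatory prefix selecting MOVDQU in the 0x0F 0x6F opcode. A decoder simply cannot assign 0xF3 a meaning until it has seen what follows it.

```c
#include <stdio.h>

/* Toy illustration of prefix reuse on x86 - not real decoder code. */
static void classify_f3(const unsigned char *p)
{
    if (p[0] != 0xF3)
        printf("no 0xF3 prefix here\n");
    else if (p[1] == 0x90)
        printf("F3 90        -> pause\n");
    else if (p[1] == 0x0F && p[2] == 0x6F)
        printf("F3 0F 6F /r  -> movdqu xmm, xmm/m128\n");
    else if (p[1] == 0xA4)
        printf("F3 A4        -> rep movsb\n");
    else
        printf("F3 ...       -> meaning depends on what follows\n");
}

int main(void)
{
    const unsigned char pause_insn[] = { 0xF3, 0x90 };
    const unsigned char movdqu[]     = { 0xF3, 0x0F, 0x6F, 0xC1 };
    const unsigned char repmovsb[]   = { 0xF3, 0xA4 };
    classify_f3(pause_insn);
    classify_f3(movdqu);
    classify_f3(repmovsb);
    return 0;
}
```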

What should be done about x86

Here's my point. It's OK to add native support for AES and other cryptographic functions - it's useful - but this is not the perfect instruction. I would really like to see a disasm instruction. Since the architecture is so complicated, the opcode map so messed up, and there's no way back, why not let Intel engineers deal with it?

We have a saying in the Czech Republic, loosely translated: "Let them eat what they cooked." It would be so nice if a processor were able to provide us with its own native decoding capability. It would be so nice if we did not have to walk through the whole instruction set reference to find out which part was twisted this time to fit new demands alongside the old shackles. After all, we could have smaller, faster, and the most reliable code possible (because who's supposed to know x86/x64 processors better than their architects?).
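
Just to illustrate the wish - nothing like this exists today, and every name and field in the sketch below is invented purely for the sake of the example - the interface could be as simple as: point the CPU at a buffer and let it report how it would decode the next instruction.

```c
#include <stddef.h>
#include <stdint.h>

/* HYPOTHETICAL: an imagined result record for a wished-for disasm
 * instruction. The real layout, if it ever existed, would be defined
 * by the CPU architects, not by us. */
typedef struct {
    uint8_t  length;        /* total encoded length in bytes         */
    uint8_t  prefixes;      /* number of prefix bytes consumed       */
    uint16_t mnemonic_id;   /* index into an architect-defined table */
    uint8_t  operand_count;
} disasm_result_t;

/* Imagined intrinsic wrapping the wished-for instruction. */
static inline int cpu_disasm(const void *code, size_t max_len,
                             disasm_result_t *out)
{
    (void)code; (void)max_len; (void)out;
    return -1; /* no real CPU implements this today */
}
```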

So, Intel engineers, for the sake of all emulator programmers, will you pick up the gauntlet and implement the perfect instruction? :-)
