How I met the optimization and other stories
Hello again. I'm gonna tell you a story about an emulator that became five times faster in a single day. In the beginning there was a disassembler and a virtual execution environment. The disassembler liked the environment so much that they got together one day, and the framework for our emulator was born. It grew day by day, line by line - up to 20k+ lines of code - and that's where the "problem" begins.
Once a project (an emulator, in this case) reaches such complexity, there's a fair chance it contains some bottlenecks. So we spent some time benchmarking it. The very first measurement showed around 7k insn/ms. What does this number mean in practice? To clean some virus families we need to emulate roughly 300k instructions, which works out to 300/7 ≈ 43 ms of delay per cleaned file. Multiply that by thousands of files and you'll definitely want to make it faster.
After rewriting some hot spots in the code to be as light as possible, we got to 9k insn/ms. That's a bit better, but frankly, we expected more. So where was the problem? Surprisingly, the "optimize for speed" (/O2) option in MSVC doesn't actually optimize the code that much, and I guess other optimization options (such as omitting frame pointers) are also ignored in this configuration. Well, my mistake - better to try than to assume. After another round of tuning we reached a much higher throughput - 18k insn/ms - and a simple change from "optimize for speed" to "full optimization" (/Ox) did the trick :-D. I couldn't believe it.
The last step was to implement an instruction cache (similar to the one used by the processor itself). This trick increased the throughput to 38k insn/ms (in some cases), which is roughly 5x faster than where we started. So now our equation gives only 300/38 ≈ 8 ms of delay per cleaned file. The takeaway for you: don't be afraid to clean your files - it should run pretty quickly. Btw: I still don't know how many people in our user base have noticed the presence of the cleaning routines in v5. If they had found and tried them, there would be no Parite and other samples in our FP submission system.
And now something less technical:
Another thing worth mentioning is the case of AXA Financial. As one of our users pointed out (http://forum.avast.com/index.php?topic=60547.0), their Indonesian site was hacked and malicious code was injected. That's quite a hard knock for those who think "I'm not looking for porn and warez, so I'm completely safe" - as you can see, even legitimate and respected sites may be hacked. Fortunately, our users were protected:
When I visited the site yesterday, everything was fine again (though I hadn't checked it in the meantime, so I don't know whether their reaction was prompt enough). Anyway, thanks, AXA, for cleaning it up. You'll surely tighten the security of your servers to avoid such attacks ;-). And of course thanks to the users who helped us (and not only us) by reporting such incidents.