Wednesday, December 12, 2012

Live Response vs. Memory Image Analysis


Before moving on to the more 'digital' part of this article, it seems desirable to start with a non-technical statement by Malin, Casey and Aquilina:
“When malware is discovered on a system, the importance of organized methodology, sound analysis, steady documentation, and attention to evidence dynamics all outweigh the severity of any time pressure to investigate.”
Here I would like to focus on evidence dynamics, data preservation and collection. The other aspects mentioned above are quite obvious and will probably become more intuitive over time. Malware and attackers are getting stronger and more often use anti-forensic methods to cover their actions and erase their fingerprints. To list just a few examples: encryption, packing, detection of virtual environments, halting when a debugger is detected via the PEB, etc. Deleting log files, flushing caches and generating fake traffic are also common. Evidence dynamics itself I would describe as the way in which information disappears, and I think this concept can safely be combined with the Order of Volatility (OOV). When acquiring evidence from a subject system, we know that volatile data has a relatively short life span and that any activity we perform can alter this type of information. Careless collection can also disturb non-volatile data and make it lose its credibility. There are, however, ways to minimize alteration of the system. Knowing how our forensic tools work, what alterations they cause, which registry keys they use and so on, makes it possible to distinguish between the traces left by malware or an intruder and those left by the investigator's own tools.
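
To make that last point more concrete, below is a minimal sketch of how an investigator could profile his own toolkit before using it on the subject system: hashing every binary and recording its name and size in a manifest, so these fingerprints can later be separated from the intruder's artifacts. The ./toolkit directory and toolkit_manifest.txt file name are only illustrative, not part of any particular tool or procedure described above.

import hashlib
import os

def sha256_of(path, chunk_size=1 << 20):
    # Return the SHA-256 hex digest of a file, read in chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(toolkit_dir="./toolkit", manifest="toolkit_manifest.txt"):
    # Record the hash, size and path of every tool we intend to run,
    # so its footprint can later be told apart from intruder artifacts.
    with open(manifest, "w") as out:
        for root, _, files in os.walk(toolkit_dir):
            for name in sorted(files):
                path = os.path.join(root, name)
                out.write(f"{sha256_of(path)}  {os.path.getsize(path):>10}  {path}\n")

if __name__ == "__main__":
    write_manifest()
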
With that in mind, there are several ways to respond live to an incident. As a reminder, we can perform live response with toolkits and frameworks, acquire a memory dump, or do both, depending on the situation. But you may ask: what should come first, live response or memory imaging?
The more tools and processes you run, the more alteration you cause on the system. This is obvious. The situation becomes more dangerous when the investigator does not know how his tools work. These tools can also crash and overwrite digital evidence. Tragedy. On the other hand, anti-forensic methods may be in place, or even a rootkit installed; in that case, relying on system tools or on our own tools may produce false information. Moreover, while collecting data on the subject system we can overwrite other segments of the file system, destroying potential evidence. Finally, we all make mistakes, so omissions during the investigation may happen. All of these arguments can be summarized as follows:

Before collecting any volatile data on the subject system, first acquire a memory dump.
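
As an illustration of this order, here is a minimal sketch of a response driver that acquires the memory image first, hashes it, and only afterwards runs the volatile-data collectors, logging every step with a UTC timestamp. The tool names (memory_imager, pslist_tool and so on) are placeholders, not real commands; in practice they would be trusted, statically linked binaries run from read-only media.

import hashlib
import subprocess
from datetime import datetime, timezone

# Placeholder commands -- substitute a trusted memory imager and collectors.
ACQUIRE_CMD = ["memory_imager", "--output", "memory.raw"]
VOLATILE_CMDS = [["pslist_tool"], ["netstat_tool"], ["handles_tool"]]

def log(message, logfile="response.log"):
    # Append a UTC-timestamped line to the response log.
    with open(logfile, "a") as out:
        out.write(f"{datetime.now(timezone.utc).isoformat()} {message}\n")

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main():
    # Step 1: acquire the memory image before anything else touches the system.
    log(f"acquiring memory: {' '.join(ACQUIRE_CMD)}")
    subprocess.run(ACQUIRE_CMD, check=True)
    log(f"memory.raw sha256={sha256_of('memory.raw')}")

    # Step 2: only now run the more invasive volatile-data collectors.
    for cmd in VOLATILE_CMDS:
        log(f"running: {' '.join(cmd)}")
        result = subprocess.run(cmd, capture_output=True, text=True)
        with open(cmd[0] + ".txt", "w") as out:
            out.write(result.stdout)

if __name__ == "__main__":
    main()
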

This concept is strongly supported in the modern literature. In my personal opinion, acquiring a memory dump first helps us avoid omissions and missteps, but it does not necessarily minimize alteration on the victim machine. Concordia University, together with one of the CERTs, conducted tests to check which method, live response or memory imaging, causes more alteration on the system. It appears that the impact on the system was bigger when live response was performed: more memory pages were altered. On the other hand, the difference between the two approaches became smaller when newer systems were taken into account. Another source I found states that the new generation of tools leaves fewer fingerprints than those from years before. In my humble opinion, much remains to be done in this particular field, and many more tests should be conducted. For the moment, I believe that memory imaging as the first step is the best action that can be taken during incident response.
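
The cited experiments reportedly measured how many memory pages were altered by each approach. Their exact methodology is not described here, but one simple, assumed way to quantify that kind of impact is to compare two raw memory images page by page, as in the sketch below; the 4 KiB page size and the raw, equal-sized dumps are assumptions of this example, not of the study.

import sys

PAGE_SIZE = 4096  # assumed page size

def count_changed_pages(before_path, after_path):
    # Compare two raw memory images page by page and count the differences.
    changed = total = 0
    with open(before_path, "rb") as before, open(after_path, "rb") as after:
        while True:
            page_a = before.read(PAGE_SIZE)
            page_b = after.read(PAGE_SIZE)
            if not page_a and not page_b:
                break
            total += 1
            if page_a != page_b:
                changed += 1
    return changed, total

if __name__ == "__main__":
    changed, total = count_changed_pages(sys.argv[1], sys.argv[2])
    print(f"{changed} of {total} pages differ ({100.0 * changed / total:.1f}%)")
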
