Thursday, 18 July 2013

Information everywhere - overload

In this short post, I would like to share some resources, ideas and personal thoughts on how to stay up to date with information and knowledge from the security industry. First of all, I used to keep track of everything that might be useful - quickly requesting subscriptions, bookmarking or even making notes on stickers. It is obvious that being 'on the same page' with everything and everyone is impossible. Doing your job and learning are the most important things - in free time, some articles, webinars etc. - but without any pressure. Below are some of the resources that I use to stay focused and informed. These are my own picks.

Hacker Journals Blogs
A blog maintained by EC-Council (one of the leading education/certification companies) with multiple sub-topics giving all the information you need regarding cyber threats, incident response, breaches, up-to-date content and news.

SANS blogs
At this site, one can find many security-related, categorized subjects - such as the security awareness blog, forensics, penetration testing and audit, or cloud security. Very comprehensive and 'verified' content giving a sorted and grouped source of knowledge and articles.

DarkReading
The site gives a great amount of information and detail regarding risk, threats and news coming from the cyber world. What is more, here you can find information about new technologies, tests, vulnerabilities etc. Furthermore, the information seems to be adequately verified and kept brief - of course, all sources are given. Possibly the quickest portion of news out there.

Webinars
Another type of learning method: these are 'virtual' lectures. Very often a good source of new information, they give the opportunity to hear and get to know professionals' opinions, discussions, products and current research. On the other hand, webinars usually contain some commercial advertisements - the only minor drawback - which do not break the lecture structure or value.

Magazines
The last source of knowledge on this short list goes to e-magazines or traditional ones. I suppose that the main purpose of magazines is to briefly present some issues, how-tos and reviews. This is rather a place for practical stuff, not theoretical.

Of course, such a list cannot cover all available material. There are also many sites providing a great source of information regarding open-source projects and frameworks. What is more, vendors very often maintain their own blogs; it is always a good choice to review them quickly. Stay focused, and have fun!

Wednesday, 26 June 2013

Flow Analysis - introduction, analysis considerations

During any type of investigation it is beneficial to get as much evidence, at as many different levels, as possible. As I have previously mentioned, incident response or digital investigation is a multi-level, multi-staged process with backtracking. In other words, the investigator first states a hypothesis and then tries to confirm or deny it. He or she then looks for further clues - if the previous hypothesis was wrong - and continues this process, very often making the path longer, depending on how good and detailed the available information is.

Basically, there are two types of evidence at an early stage: network-based and host-based. I have written about basic NBE analysis before, but I have not covered session data analysis (not sessions rebuilt from raw traffic, but flows captured on network devices). Typically we can split information coming from the infrastructure into raw traffic, logs and flows. (The problem with logs is that in many situations they present only a clue, without showing numbers and data.)

Flows do not lie. Without them the network is invisible: knowing how much traffic crosses the network is a necessity, and a flow is proof that successful network-level communication occurred between hosts.
 
Standard flow definition:

A flow is a series of packets that share the same source and destination IP, source and destination port, and IP protocol (UDP or TCP); a data flow connection between two hosts can be defined uniquely by its five-tuple factors (Michael W. Lucas). A flow record is a summary of information about a specific flow, tracking which two hosts communicated with each other. Several conclusions (a small sketch follows this list):
  • one flow = one direction
  • session != flow (a session contains two flows)
  • flows do not contain the data exchanged between hosts (no passwords, usernames, ...) - flow records are small
  • a flow aggregates the transmission for one direction only
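To make the flow/flow-record distinction concrete, here is a minimal Python sketch (my own illustration, not tied to any particular collector) that groups packet summaries by their 5-tuple and aggregates packet and byte counters per direction:

from collections import defaultdict

# A packet summary: (src_ip, dst_ip, src_port, dst_port, protocol, bytes)
packets = [
    ("10.0.0.5", "192.0.2.10", 49152, 80, "TCP", 60),    # client -> server
    ("192.0.2.10", "10.0.0.5", 80, 49152, "TCP", 1500),  # server -> client
    ("10.0.0.5", "192.0.2.10", 49152, 80, "TCP", 400),   # client -> server
]

# Each unique 5-tuple becomes one unidirectional flow record.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, dst, sport, dport, proto, size in packets:
    key = (src, dst, sport, dport, proto)  # the 5-tuple
    flows[key]["packets"] += 1
    flows[key]["bytes"] += size

for key, counters in flows.items():
    print(key, counters)
# Two flow records are produced - one per direction of the same session -
# and no payload is stored, only counters and the 5-tuple.

Running this prints exactly two records for the three packets above, which is why one session equals two flows.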
Versions:

NetFlow version 7 includes switching and routing information not available in previous versions - such as the IP address of the next hop for the flow - making it possible to track a flow across different paths in the network. The latest version of network flow export is a standard called IP Flow Information Export (IPFIX).

5-tuple factors
A flow is a series of packets described by the 5-tuple factors. Let us have a look at flows and their analysis based on different types of network traffic:
  1. ICMP flows
  2. UDP flows
  3. TCP flows
ICMP flows

ICMP carries basic control and error information for internet routing. ICMP has no TCP-style flags; instead it has an ICMP type (the general purpose of the packet) and an ICMP code.

type 8 - echo request
type 0 - echo reply

Two flows: in the first, the client creates an ICMP request with source and destination IPs; in the second, the server responds. The sensor holds ICMP flows in memory until a timeout expires, at which point the sensor marks the flows as completed and transmits them to the collector. Quick example: when sending a UDP packet to a closed port, we may get an ICMP packet in response (not a UDP one!) showing that the port is unreachable (type 3, code 3).
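As a hedged illustration of that port-unreachable case (it assumes the scapy library is installed, requires root privileges, and 192.0.2.10 is only a placeholder target):

# Sketch only: send a UDP datagram to a (probably) closed port and inspect
# the ICMP answer. Requires scapy and root privileges.
from scapy.all import IP, UDP, ICMP, sr1

probe = IP(dst="192.0.2.10") / UDP(dport=33434)   # high port, likely closed
reply = sr1(probe, timeout=2, verbose=0)

if reply is not None and reply.haslayer(ICMP):
    icmp = reply[ICMP]
    # Destination unreachable is type 3; code 3 means "port unreachable".
    print("ICMP type:", icmp.type, "code:", icmp.code)
else:
    print("No ICMP reply (filtered, open port, or timeout)")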

UDP flows

UDP has no codes or types and no TCP-like flags, but it uses TCP-style ports. It has no built-in concept of a session, so it is described as connectionless. UDP does carry useful application-level data, and most UDP traffic is part of some sort of session or transaction (a DNS request is an example). As with TCP, the UDP request originates from an unused port on the client side and uses the standard DNS port 53 as its destination. There are two flows; because UDP is connectionless and has no TCP FIN flags, the sensor waits for a timeout and then reports to the collector.
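A tiny standard-library sketch of the client side of that pattern (192.0.2.53 is a placeholder resolver): for UDP, connect() sends no packets, it only fixes the peer and lets the OS pick the ephemeral source port - exactly the port that would appear in the outgoing flow record.

import socket

# UDP "connect" sends nothing; it records the peer (port 53 here) and makes
# the OS assign an ephemeral source port on the client side.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(("192.0.2.53", 53))          # placeholder DNS resolver
src_ip, src_port = sock.getsockname()
print("flow would be recorded as:", (src_ip, "192.0.2.53", src_port, 53, "UDP"))
sock.close()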

TCP flows

TCP flags indicate the state of a connection - requested, ongoing, being torn down. First, the client chooses an unused port exclusively for itself and sends the first synchronization packet with the SYN flag set. The server responds with its own first packet, carrying the SYN and ACK (acknowledgement) flags. Since a single flow shares the same source and destination IP/port and IP protocol, the client's third handshake packet is the second packet of the first flow. Now the client can request some data using GET/POST requests (as part of the first flow), and the server responds with HTML content, files and other data. Packets then stream back and forth, including ACKs as required to acknowledge receipt of earlier packets. When the communication is about to end, one of the participants sends a FIN, gets an ACK/FIN back, and the client confirms with a final ACK. The sensor sees the FIN and ACK and terminates both flows.
 
TCP flow explained

Flow management systems can track protocols other than ICMP, UDP and TCP, but those three comprise the overwhelming majority of network traffic.
5 tuple is a notion employed by network and system administrators in identifying the key requirements to create an operational, secure and bidirectional network connection between two or more local and remote machines. The primary components of 5 tuple are the source and destination address. The former refers to the IP address of the network that created and sent the data packet, while the latter is its recipient.

Definition found on Techopedia. During a big download, a flow can be split into several consecutive flow records (the configured timeout is the maximum time the device can track a single flow).
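A minimal sketch of that splitting (my own illustration, with an assumed 30-second active timeout) showing how one long transfer becomes several consecutive flow records:

# Split a long stream of packet timestamps (seconds) into consecutive
# flow records using an assumed "active timeout" of 30 seconds.
ACTIVE_TIMEOUT = 30

def split_into_records(timestamps, timeout=ACTIVE_TIMEOUT):
    records = []
    start = prev = timestamps[0]
    count = 1
    for ts in timestamps[1:]:
        if ts - start >= timeout:          # device stops tracking, exports record
            records.append({"start": start, "end": prev, "packets": count})
            start, count = ts, 0
        prev = ts
        count += 1
    records.append({"start": start, "end": prev, "packets": count})
    return records

# A 70-second download produces three consecutive records with a 30s timeout.
print(split_into_records(list(range(0, 71, 5))))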
 
Flows visualization

Analysis considerations

Having flows in our security system gives us a great view of our infrastructure and communication. Flows never lie and are a reliable piece of information. This kind of data can show us how big the traffic was, whether there was communication at all, whether the recipient responded, and it supports information taken from logs or other levels. Standard flows can also suggest what the communication was, based on the destination port. Remember that a port assignment is not proof that a particular protocol was running over that port. What is more, flows (session data too) are the best for tracking what happened with an infected/compromised host after it was owned. On the other hand, collectors capture many flows for almost every incident, making analysis harder and more time-consuming.

First, the available filters should be applied to help find meaningful patterns, clues and answers. Based on the 5-tuple factors, most information can be found easily and quickly. In my opinion, very often only this information is really needed, possibly together with source/destination bytes. There are of course other filters, pre-configured reports and alerts that clearly remove useless information. Then, what I find really awesome is the option to visualize the flows included in a specific time frame or incident. There are plenty of free tools with such capabilities (gephi, afterglow, etherape, graphviz, safemap). For example, please check http://www.itworld.com. Furthermore, several SIEMs have the capability to make flows and events smarter, applying many tests (on flows, aggregated events, or both) and assigning them special details such as credibility, severity and prioritization. Another great feature is to intelligently group flows together to present a suspicious/known pattern, or even show what is unusual.
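As a small, hedged example of the visualization idea (plain Python writing a Graphviz DOT file, with made-up flow records), the src/dst pairs from a set of flows can be turned into a directed graph and rendered with the dot tool:

# Turn a handful of (hypothetical) flow records into a Graphviz DOT graph.
# Render afterwards with e.g.:  dot -Tpng flows.dot -o flows.png
flows = [
    {"src": "10.0.0.5",  "dst": "192.0.2.10",   "dport": 80,  "bytes": 120000},
    {"src": "10.0.0.5",  "dst": "198.51.100.7", "dport": 443, "bytes": 4500},
    {"src": "10.0.0.99", "dst": "192.0.2.10",   "dport": 80,  "bytes": 900},
]

with open("flows.dot", "w") as out:
    out.write("digraph flows {\n")
    for f in flows:
        label = f'{f["dport"]}/{f["bytes"]}B'
        out.write(f'  "{f["src"]}" -> "{f["dst"]}" [label="{label}"];\n')
    out.write("}\n")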
At the end of the day, the best way to understand flows and correctly extract potential information from them is to connect different types of data together, link them, correlate them and, of course, practice.

Wednesday, 22 May 2013

Autopsy 3.0.5 - Forensics Browser for Digital Investigations

Digital investigation is a nested process with backtracking, where - in the first stage - we find and test hypotheses that could possibly answer questions about a security incident or, basically, a digital scene. This can be easily understood by any data analyst regardless of security knowledge, as the general process is not unique and can be successfully applied to any investigation or data extraction. Lately I have seen an introductory presentation showing how big-data analysis cooperates with the standard security model. On one hand, we have a security analyst who performs the daily incident response process and tuning. This is typically based on hot and warm data, all delivered by standard SIEM/NBAD technology. The same person or team cooperates with a data analyst (who does not need any security knowledge) and operates at the level of the data warehouse and cold data. At this stage, new abnormal patterns can be spotted, especially with adequate skills in data mining and the like. As you can probably see, this model looks pretty straightforward and can be deployed in big environments, combining different fields and personnel. I will cover this subject later. There are basically two types of investigation (at the level of system status): live or post-mortem analysis. The so-called 'live forensics' is a little bit tricky; it is performed in a non-trusted environment, on the suspect system, and should follow a sound methodology, leaving as few fingerprints as possible. On the other hand, we have 'dead analysis', which should be (and is) run in a trusted, prepared laboratory, under control. There are of course many sub-processes, stages and guidelines, but these have been or will be covered in separate posts. I am very happy to call this field of knowledge (specialization) digital archaeology.

In this article, I am presenting and discussing features delivered with the Autopsy Forensic Browser - a front end for TSK (The Sleuth Kit). This tool is essential for forensic investigations of both Linux and Windows images (NTFS, FAT, HFS+, Ext3, UFS). Please visit the project homepage: http://www.sleuthkit.org/. Below, definitions of both dead and live analysis are taken from the Autopsy homepage.

Autopsy - main screen
A dead analysis occurs when a dedicated analysis system is used to examine the data from a suspect system. When this occurs, Autopsy and The Sleuth Kit are run in a trusted environment, typically in a lab. Autopsy and TSK provide support for raw, Expert Witness, and AFF file formats.
A live analysis occurs when the suspect system is being analyzed while it is running. In this case, Autopsy and The Sleuth Kit are run from a CD in an untrusted environment. This is frequently used during incident response while the incident is being confirmed. Following confirmation, the system is acquired and a dead analysis performed.

Creating, setting new case and adding evidence images

First of all, we start our investigation by creating a new case. There we have the Case Wizard, asking us for the standard investigator/case names. We also need to fill in the base case directory path (everything stored in one place) and choose which images should be added. We can acquire an image/disk with the Image Wizard, and a case can hold multiple images. The old-school DD and E01 formats are supported. As stated, during adding, Autopsy will create an internal database for the image and the findings. Additional options are available, such as 'search in unallocated space' and features that can speed up the process when disabled.

Ingest features

After the evidence data is added, the ingest modules will start working in the background (delivering results over time). As can be read on the homepage, the ingest modules analyze files in a prioritized order, so that files in a user's directory are analyzed before files in other folders. The Recent Activity module will extract user activity in the operating system and web history. Hash lookup, in combination with the NSRL, gives us a great tool to get rid of false positives and known files. A standard module will also calculate a hash for every file. The Exif (exchangeable image file format) parser and the archive extractor are worth mentioning. Additionally, there is a keyword search feature - for both manual and automatic searches.

Additional tips:
 - When clicking on a file in the directory listing, we can switch to the location of the file in the file system
 - Quick searches can be run with the keyword search box in the right corner of Autopsy

Autopsy workflow

To analyze system artifacts and Autopsy findings we need to follow a top-down analysis philosophy. It means that first we look at the tree view, showing system files, results etc., and then go deeper into the file system details with the directory listing and low-level data. It is a pretty standard method of analysis, the most intuitive and simple; commercial solutions use it as well.
Autopsy - workflow schema
In the most general pane/view there are three trees. The first is called 'Images', and here any evidence files/disks/images can be added and reviewed. What is more, for each image the full structure of volumes is presented with detailed information and general file system listings. As mentioned before, more than one image can be added to each case. Then we have the Views panel, showing extracted information grouped by different categories such as file types, documents and recent files. The last pane shows Results, which basically presents all information parsed from the file system. There are web history content (bookmarks, cookies, downloads and history), keyword and hash-set hits, installed programs and attached hardware. At the end of the flow we get the most important and detailed information from the low-level view. Depending on the file tagged, there are result/hex view, image/media view, and string and text view, giving comprehensive data parsed from files. What is more, you can manage your views, arrange them in another 'look', and move forward/backward during your analysis, switching between open windows and panes.

Autopsy Analysis Features
There are many interesting and advanced features delivered with Autopsy, and the project is still being developed. Below I describe the function of each.

Timeline: in general it shows how many changes and events occurred over time for each activity on the file system. There is a histogram representation with zoom in/out capability, and each file can be checked for more detailed information. A really awesome, straightforward feature that can hardly be matched by the commercial stuff out there! Really good in terms of filtering out the 'noise' in the file system and checking only the relevant files if the time frame is known. MAC times are included, and searches are also run in unallocated space.
Timeline - great feature
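Outside of Autopsy, the same timeline idea can be roughly approximated with a few lines of standard-library Python (a sketch over a mounted copy of the evidence; the path and time frame are placeholders), listing files whose modification time falls inside a window of interest:

import os
import datetime as dt

# Placeholder mount point of the (read-only) evidence copy and time frame.
EVIDENCE_ROOT = "/mnt/evidence"
START = dt.datetime(2013, 5, 1)
END   = dt.datetime(2013, 5, 22)

for dirpath, _dirs, files in os.walk(EVIDENCE_ROOT):
    for name in files:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue
        mtime = dt.datetime.fromtimestamp(st.st_mtime)
        if START <= mtime <= END:
            print(mtime.isoformat(), path)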

Web/Email content: Autopsy supports the Firefox, Chrome, IE and Safari browsers. It looks for different types of information such as bookmarks, cookies or download history.

Registry: there is some work done to help with registry identification (accessed documents and USB devices), but there is still no parser for the ntuser.dat file. The same can be said about lnk files, which only identify recently accessed documents. For deeper analysis, additional tools and parsers need to be used.

Keyword search and indexing: results from this feature are presented under 'Keyword hits'. As mentioned on the homepage, there is a powerful text indexing engine in use, as well as a text extraction mechanism (furthermore, all files with textual content are indexed). By default, Autopsy searches for regular expressions such as email addresses, phone numbers, IP addresses and URLs. Additional keyword lists can be added (for the default automatic searches), and ad-hoc search is also available.
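For the ad-hoc side, the default regular-expression idea is easy to reproduce outside the tool; here is a small sketch (my own simplified patterns, not Autopsy's actual built-in expressions) run over an extracted text blob:

import re

# Simplified patterns - illustrative only, not Autopsy's built-in expressions.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "url":   re.compile(r"https?://[^\s\"'<>]+"),
}

def keyword_hits(text):
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

sample = "contact: bob@example.com, beacon to http://198.51.100.7/gate.php from 10.0.0.5"
for category, hits in keyword_hits(sample).items():
    print(category, "->", hits)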

Other valuable features: EXIF analysis, media (video can be reviewed without additional extensions), images, thumbnail viewer, file system analysis, Unicode string extraction from unallocated space and unknown file types.

Reporting: in terms of evidence handling there is a mechanism for tagging/bookmarking all found files and content. A report can be created in different formats (with notes), and extracting files is also possible.

Known 'bad' hash files: after enabling hash lookup and supplying Autopsy with the NSRL list and a user set of known good/bad files, the tool will identify files as good or bad. A nice feature for looking for known threats (not unknown ones, because there would be too many false positives). Lookups based on hashes are also a little bit old-school right now, especially when there is only one hash per file. Of course, when determining whether a file has a proper extension, we are talking about signatures, not hashes.

Media: in terms of video/picture review of the suspect image, the 'Images' or 'Thumbnail' option is perfect. The author says that there is great support for drivers, which also makes replaying and loading much faster. Sound is included.

File Type: all known types of files are sorted out and additionally added to specific prepared groups (for greater visibility). What is good is the fact that Autopsy will check whether the signature matches the declared file extension. Combined with the possibility of flagging unknown hashes, it is a great feature for tracking suspected files!

Deleted information: Autopsy will try to recover all deleted files, tag them and list them appropriately in the particular view. It is always a great idea to show where a file was and when it was deleted. Another essential feature.

Content: every piece of content - irrespective of type or extension - will be checked for strings, ASCII etc. and presented in different views: raw, hex, text or image. This analysis is based on the metadata carried by each file.

File search by hash and attributes: Autopsy provides something like a script for searching files based on metadata. Documents can be found based on MD5 hashes, names, size, MAC times and 'known/unknown' status.
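The hash-lookup logic itself is simple; below is a minimal sketch (hypothetical hash sets and a placeholder mount point, standard library only) of classifying files as known-good, known-bad or unknown by MD5:

import hashlib
import os

# Hypothetical hash sets - in practice these would come from the NSRL
# and from your own known-bad list.
KNOWN_GOOD = {"d41d8cd98f00b204e9800998ecf8427e"}   # e.g. the empty file
KNOWN_BAD  = {"44d88612fea8a8f36de82e1278abb02f"}   # e.g. the EICAR test file

def md5_of(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def classify(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digest = md5_of(path)
            status = ("known-good" if digest in KNOWN_GOOD
                      else "known-bad" if digest in KNOWN_BAD
                      else "unknown")
            yield status, digest, path

for status, digest, path in classify("/mnt/evidence"):   # placeholder mount point
    if status != "known-good":
        print(status, digest, path)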

Documentation: sufficient help and information is available in the help menu in Autopsy itself; the wiki and data for developers are also available here and here.

Autopsy - Media View

To summarize my simple and short description, I would like to share my humble opinion about Autopsy. I have some experience with commercial forensics tools, and I find Autopsy really sufficient, with great features (many not available in commercial products) and a thoughtful design. I like its simplicity, its great and dynamic development (Carrier, Venema, Farmer), many great ideas, the support for investigators and the efficiency. There are many essential features in this forensic browser which really help in extracting evidence and tracking potential findings or hypotheses. When performing investigations I really feel that this particular tool was designed by a professional forensics person with great experience and an academic approach at the same time. There are probably tons of things that I have forgotten to write about, but I am sure that you will find anything you are looking for in Autopsy during a digital case. Personally, I will continue testing and using Autopsy for my own investigations. What needs to be mentioned is that for a full and really comprehensive investigation one needs additional tools (parsers!), knowledge and processes. Really happy with this solution, and looking forward to new versions and outstanding ideas!

Saturday, 11 May 2013

Proxy Authentication - investigating requests and responses

When describing protection against next-generation threats, we need to focus on adding an additional layer of protection to our defense-in-depth architecture. Later on, I will try to present what I understand by 'next-generation threat', and why the 'old-school' approach (pattern-matching techniques, AV, IPS, ...) is no longer sufficient. In short: let us monitor outgoing traffic from our company and be suspicious about every flow - even replaying it. There are several interesting products out there that provide methods of checking outbound connections, filtering them out, checking for malicious activity and delivering sandbox technology 'on the wire'.

For the moment, I have prepared two flow diagrams presenting how an internal host (client) authenticates itself to the proxy to 'get' something from the internet (server). I took into account two methods of authentication: the first relying on the NTLM challenge-response, and the second also NTLM but with support from a domain controller. Why am I trying to present this? It seems that understanding how traffic goes through the proxy to the outside world is crucial in network forensics investigations. Imagine the situation that you have spotted some GET/POST queries to a C&C server, and now you need to verify how your systems responded to them, based on proxy logs, captured (outgoing) traffic, and hopefully flows (from the firewall or router outside!). This is helpful generally to understand how it works, but also to check whether our proxy is susceptible to some kind of obfuscation or fragmentation and - most important - whether it is up to date.

Flow Diagram 1 - NTLM challenge-response authentication

The first picture presents the NTLM challenge-response authentication method. I have also added step numbers to make it easier to read and compare with the description below. These are the steps during proxy authentication, divided into packets:

First, the client sends a proxy request for www.intruderit.blogspot.com, using the following message (Packet 1):

---------------------------------------------------------- Packet 1 -------------------------------------------------------------------
GET www.intruderit.blogspot.com:80 HTTP/1.0
User-Agent:Mozilla/4.0 (compatible ... )
proxy-Connection: Keep-Alive
Pragma: no-cache

In response, the proxy sends a "Proxy Authentication Required" message with error code 407 - asking for authentication - then tears down the connection. Access to the proxy is denied.

---------------------------------------------------------- Packet 2 -------------------------------------------------------------------
HTTP/1.1 407 Proxy Authentication Required
proxy-Authenticate: NTLM
Connection: close
proxy-Connection: close
Pragma: no-cache
Cache-Control: no-cache
Content-Type: text/html
The client re-establishes the TCP connection and sends the request again with NTLM authentication:

---------------------------------------------------------- Packet 3 -------------------------------------------------------------------
GET www.intruderit.blogspot.com:80 HTTP/1.0
User-Agent:Mozilla/4.0 (compatible ...)
proxy-Connection: Keep-Alive
Pragma: no-cache
proxy-Authorization: NTLM + BASE64_Login + HASH

The proxy responds with the following message, indicating denied access and an authentication challenge for the client:

---------------------------------------------------------- Packet 4 -------------------------------------------------------------------
HTTP/1.1 407 Proxy Authentication Required
proxy-Authenticate: NTLM (…)
Connection: Keep-Alive
proxy-Connection: Keep-Alive
Pragma: no-cache
Content-Type: text/html

---------------------------------------------------------- Packet 5 -------------------------------------------------------------------
GET www.intruderit.blogspot.com:80 HTTP/1.0
User-Agent:Mozilla/4.0 (compatible ...)
proxy-Connection: Keep-Alive
Pragma: no-cache
proxy-Authorization: NTLM + (…) + RESPONSE

---------------------------------------------------------- Packet 6 -------------------------------------------------------------------
HTTP/1.1 200 Connection established


Application data can be exchanged after the NTLM authentication is finished. Without doubt, the way investigations should be run depends on the specific, current setup in the client's infrastructure. There are plenty of different configurations, and the proper approach should be chosen during the examination. Based on the documentation (here), we can read:

...returns an HTTP reply with status 407 (Proxy Authentication Required). The user agent (browser) receives the 407 reply and then attempts to locate the users credentials. Sometimes this means a background lookup, sometimes a popup prompt for the user to enter a name and password. The name and password are encoded, and sent in the Authorization header for subsequent requests to the proxy.
 
From a security point of view, there are several possibilities when tracking outgoing malicious GETs. If the user agent has already authenticated itself correctly and the malicious URL is technically valid, the proxy can block the malicious request (if the proxy has a proper policy set up). In HTTP proxying, not all requests require authentication. Standard authentication is TCP based, so the proxy only needs one authentication at the beginning of a new session; subsequent HTTP requests would not require it. On the other hand, if the user's credentials are used 'correctly' by malware, a new session can be set up again anyway. If the GET request is obfuscated, fragmented or somehow malformed, the proxy will immediately block it. Possibly we won't see any responses (allow/deny) from the proxy if the firewall blocks the communication.

Flow Diagram 2 - authentication with DC
The second picture presents proxy authentication when a domain controller is available. For more information about this particular sample please follow this URL. There is a great article there, also presenting samples and other important facts. By the way, a quick reminder: check the URI/URL difference here.

To be definitely sure whether the traffic is blocked or not, the proxy logs should be checked first. For verification, flows can be delivered from the FW/router behind the proxy for further confirmation, possibly revealing some potential holes. Contacting the proxy guys or the network people is always a good idea.
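A hedged sketch of that first check (it assumes a Squid-style access.log layout, which will differ per proxy, and a made-up suspicious host), counting how the proxy answered requests to the suspect destination:

from collections import Counter

# Assumes a Squid-style access.log where the result/status is in the 4th field
# ("TCP_DENIED/407", "TCP_MISS/200", ...) and the URL in the 7th - adjust to
# your proxy's actual log format.
SUSPECT = "intruderit.blogspot.com"
statuses = Counter()

with open("access.log") as log:
    for line in log:
        fields = line.split()
        if len(fields) < 7:
            continue
        result, url = fields[3], fields[6]
        if SUSPECT in url:
            statuses[result] += 1

for result, count in statuses.most_common():
    print(result, count)
# Many 407s with no following 200 suggest the client never authenticated;
# TCP_DENIED entries suggest policy blocked the request outright.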

Monday, 29 April 2013

SIEM deployment


I would like to start with a quick reminder and then share my feelings and comments about SIEM deployment. Reviewing magazines and security portals reveals that the SIEM subject has become so popular that everyone is talking about it, constantly exchanging ideas and views. It seems that drawbacks and issues are discussed more and more often, but on the other hand the knowledge about security monitoring (NSM) is becoming wider and more accessible. How to understand the potential of a security data center, and how to look at SIEM deployment? Let me share my humble opinion.

What is SIEM?

SIEM is responsible for gathering, filtering, normalizing, aggregating and correlating events coming from multiple data sources, and changing them into valuable information in a security database. Data is stored as events, flows and records from external feeds and may generate a potential security alert - an offense. All of this should be performed in real time, or near real time.


Logs - this type of information is taken from different operating systems and network devices. Usually, syslog is used as a standard protocol for pushing logs remotely. On the other hand, some systems and infrastructure architectures demand installing additional software on the remote host just to allow it to send logs over the network. What is more, protocols such as OPSEC LEA, SNMP and JDBC/ODBC (polling) need a different kind of configuration, and this can be - in some circumstances - a tricky task.
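For the syslog case, forwarding a test event to a collector takes only a few lines with the standard library (a sketch; 192.0.2.20 is a placeholder SIEM collector address and UDP 514 the classic syslog port):

import logging
import logging.handlers

# Placeholder collector address - point this at your SIEM's syslog listener.
handler = logging.handlers.SysLogHandler(address=("192.0.2.20", 514))
logger = logging.getLogger("app-audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# This event travels as a classic UDP syslog message to the collector.
logger.info("user=jdoe action=login result=failure src=10.0.0.5")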

Flows - some SIEM solutions have mechanisms to handle network-layer information and provide details about network flows (session data). These are NetFlow, J-Flow, sFlow. This capability can be compared somewhat to NBAD systems (Network Behavior Anomaly Detection), and we end up with a SIEM+NBAD combo.

External feeds - the SIEM builds its own static information about each host in the monitored network. We can call it an 'asset', as it contains data about names, addresses, MAC, timing, usernames, vulnerabilities (from a vulnerability scanner) etc. What is more, records from external databases can feed our SIEM and provide additional contextual details to our correlation engine (blacklisted domains, records for compliance purposes, lists of privileged users and so on).

Full packet capture technology - several SIEMs have the capability to perform deep packet inspection and capture, which provides the ability to identify traffic up to layer 7, and also offers content capture capabilities. This means that the SIEM can identify applications regardless of port (dynamically allocated ports or tunnels, port-independent applications). What is more, the full packet is captured - not only the headers (session data) but also the data.



Why SIEM and who needs it?

I googled the answers, and I want to quote three of them, as I find that they describe the situation completely and make interesting reading:

A lot of industries need SIEM for compliance purposes. SIEM is a big part of HIPAA, HITECH, GLBA, and Sarbanes Oxley. The penalties for non-compliance can reach staggering amounts. Beyond the regulatory requirements and penalties, healthcare and financial services organizations face a greater business risk due to volume of sensitive data they store and the associated impact of a damaged reputation related to a breach.
From a business perspective, SIEM is usually a compliance and regulatory  requirement for most certifications. One of the major advantages gleaned  from implementing an SIEM solution is the perspective it brings to the  organization’s security posture, accessibility and the usable metrics it generates. All analysis and dashboards are available on a single console to  aid decision making. Given the security edge an SIEM solution it gives an organization, careful  consideration is due prior to procurement.
The SIEM market is evolving towards integration with business management tools, internal fraud detection, geographical user activity monitoring, content monitoring, and business critical application monitoring. SIEM systems are implemented for compliance reporting, enhanced analytics, forensic discovery, automated risk assessment, and threat mitigation. 
According to an RSA survey, the primary usage of SIEM solutions is security operations, then IT and network operations, ending with compliance and regulatory requirements.

  
SIEM deployment phases

The only phase I want to write about at this stage is the preparation phase. Undoubtedly this is the most crucial part, as it contains the needs analysis process, choosing the solution, management software and others. According to Gartner and eIQnetworks writings, this planning stage is very often undervalued and neglected, which leads to delays, expensive mistakes and disappointment.

The goal is to define what the SIEM will be used for. The team and management should focus on choosing the right technology, with adequate capabilities, that meets their own and the regulatory requirements. Are we monitoring infrastructure only for SOX purposes? Do we need visibility over all servers in a 100,000+ employee company? Are we responsible for protecting banks, the army and other strategic objects? Do we need protection against APTs and advanced evasion techniques? What about money and support?

Scale estimation is another point. How many log sources will we have? Which machines generate the most traffic? How do we deal with logs from firewalls and other top talkers? Define bottlenecks, check network segmentation and geo-localization. Choosing the right architecture and proper security infrastructure is essential. How are the collectors set up and how big are their disks? According to Networld magazine, the two biggest challenges are data retention and quick access to data (compression ratio).
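A back-of-the-envelope sketch of the retention part (every number below is an assumption for illustration, not a recommendation): peak events per second, average event size, retention period and compression ratio give a first storage estimate.

# Rough storage estimate - every number below is an assumption to adjust.
eps            = 5000         # sustained events per second across all sources
avg_event_size = 500          # bytes per normalized event
retention_days = 90
compression    = 8            # assumed compression ratio on stored events

raw_bytes    = eps * avg_event_size * 86400 * retention_days
stored_bytes = raw_bytes / compression

print(f"raw:    {raw_bytes / 1e12:.1f} TB")
print(f"stored: {stored_bytes / 1e12:.1f} TB")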

Log sources and data quality. Define what information you want in the SIEM. Check and estimate the performance impact on collectors and the network. Discuss the process of adding log sources with the operations and support teams for clarity and understanding. You may find that crucial log sources cannot be added to your SIEM because of technical issues in the network or insufficient SIEM capabilities.

Processes. Define what processes can be modified and how to manage them. Prepare a list of reports that can be generated by the SIEM and check their value. Check your IR process and see how it can be improved thanks to the SIEM. Discuss the problem of process maturity and additional tools.

Suffering from SIEM Deployments

There is great work done by several researchers and companies showing problems with SIEM deployments and what caused them. It is very interesting, from a management perspective, to see whether others have the same issues and have made the same mistakes. eIQnetworks conducted a survey asking for thoughts after deploying a standard SIEM in an organization. The company published the results and findings:

  • 31 percent of respondents would consider replacing their existing SIEM solution for better cost savings
  • 25 percent of respondents have invested more than a month in professional services since deploying their current SIEM product
  • 52 percent of respondents require 2 or more fulltime employees to manage their current SIEM deployment

RSA came up with the same idea and asked mid-sized organizations the question "If you could change one thing about your current SIEM/log management solution, what would it be?".

  • Improve alerting to identify only issues that require attention
  • Less expensive
  • Easier to learn and use/ Better user interface

The answers are very similar to the reflections provided by Gartner in the "On SIEM Deployment Evolution" article. Gartner concluded with several types, or phases, of problems. The first one is "stuck in collection", saying that most companies tend to deploy changes one after another, so when phase 1 (adding new log sources) is not completed it is not possible to move on to the detection and log analysis phase. "The end result of this is a nice log collection system at 10x the price of a nice log collection system." Another one, "stuck in compliance", describes the situation when the SIEM is bought only to satisfy auditors and regulations - "we have SIEM, we are secure" - and nothing more is done with it. The "stuck in investigations" type shows the problem with handling incidents: the SIEM is used for investigations, not for detection, and "the organization never matures to the security monitoring stage."

BrightTALK webinar on SIEM deployment by AlienVault

Successful SIEM

I like the statement "it works like a bicycle - you are happy only if you pedal and thus move forward." According to information found on the net, the best way for a SIEM deployment is constant improvement and development. We try to take systematic steps to fully deploy all capabilities of our chosen solution, keeping a long-term vision in mind. So it is like a combination of operational tasks and strategy. I suppose it cannot be called a "quick wins process", as SIEM deployment is a collection of rather difficult tasks and demands great engagement, knowledge, budget and skilled personnel. Checking "The Secret to Successful Global SIEM Deployments", there are high-level steps in planning a successful SIEM:
  • Adapt the SIEM to the organizational structure of the business.
  • Create advocates and gain sponsorship within the internal team.
  • Get a complete picture of your networked assets.
Lately I watched a webinar sponsored by AlienVault (by the way, the best documentation out there!), with a great presentation about SIEM and its successful deployment. It not only gave a high-level overview, but also described the situation on the market, the constantly evolving threats and the methods of 'watching' them. The lecturer also showed the most important steps in deployment, focusing on use cases, incident response and choosing the right solution - marked as the most crucial and demanding. This talk can be found on BrightTALK under the title "Six Steps to SIEM Success".
Lately there was a pretty awesome discussion at the RSA conference, with the statement that there are not many successfully deployed SIEMs, not to mention those based on the big data approach. What is more, the participants concluded that it is not important whether we have "big data" or "small data". The most crucial aspect is to focus on incident response processes, process management, understanding goals, and always thinking about the gathered information - how it can be correlated and analysed, and what security value it brings to the bigger picture and to investigations. What I think is that, at the end of the day, the only truly relevant part of any investigation or deployment is the human - the "human asset".

Monday, 22 April 2013

Information Security Education


With things like security awareness campaigns, critical IT security staffing, human security vulnerabilities and behavioral process analysis, there is an easy conclusion: education. Obviously it works on several layers - IT security pros need to learn continuously, to be one step ahead of threats and attackers, and the end user should be informed and protected the whole time. There are perhaps many books about IT security metrics, articles and the things mentioned before, but here I would like to focus on security training. We know that IT security is quite a wide field of knowledge (developing, changing, ...), combining networking, programming, mathematics, OSes and many other specializations - but the questions are: how to stay on top and protect the company, how to handle self-development, how to manage the knowledge flow in a team? Probably people from management and team leaders have some ideas and a philosophy for handling such issues. On the other hand, what can be done from the perspective of each security analyst or specialist is self-study, self-motivation, encouraging team mates, being proactive and being willing to get involved. The whole time - remember the Red Queen Effect?

In my humble opinion there are several ways to live with IT security in mind, develop technical/soft skills every day and keep a positive attitude to work. I like to think about it as a place in which being smart or educated is not enough (or not the most important thing), but having heart and passion is.

online lectures - self education

      Security Awareness

This is the foundation and baseline, and it may be developed and repeated all the time: conferences, online trainings (general purpose), security blogs, free talks, presentations etc. Basically, I suppose that the most time should be spent on general subjects. Possibly, a mentor at work would be the best person to have and to ask for help.

      Specialization

Choose the part of security that is most interesting for you and dedicate time and effort to it. Look for literature, websites, certificates and trainings. Plan your own projects.

       Be updated

All knowledge is useless when outdated. Look for webinars, presentations and follow-ups. Find informational web pages, subscribe and be engaged. Again, having the opportunity to work in a big security company is always the best option.

      Share your experience

Motivate your team colleagues, have educational meetings, prepare great documentation, be engaged and open-minded. Exercise soft skills, communication and cooperation.

professional trainings - specialization
More than enough is too much, so never forget about a 'healthy' attitude, a systematic approach and obligatory rest. Eventually ... protect your dream.


Saturday, 6 April 2013

The Red Queen Effect Feedback Loop


It is more of an academic style to start an article or lecture with a definition, but in this case such an approach will be the most straightforward. Reading Bruce Schneier's blog and book (Liars & Outliers - How does society function when you can't trust everyone?) I constantly find great thoughts, conclusions and theories, and I would like to comment on one of them and share my opinion. Bruce is a widely known security pro - especially in hard-core mathematics and cryptography - but he is also an interesting writer, and he shares his opinion about evolution and the problem of trust in security itself.

Firstly, what is Red Queen Effect?
The Red Queen hypothesis, also referred to as Red Queen's, Red Queen's race or the Red Queen Effect, is an evolutionary hypothesis which proposes that organisms must constantly adapt, evolve, and proliferate not merely to gain reproductive advantage, but also simply to survive while pitted against ever-evolving opposing organisms in an ever-changing environment. Source: http://en.wikipedia.org/wiki/Red_Queen_hypothesis. Please also check the attached graph showing the never-ending war between prey and predators - one of the most famous examples of the Red Queen Effect.

(...) arms race is known as the Red Queen Effect…”It takes all the running you can do, to keep in the same place.” A species has to continuously improve just to survive, and any species that can’t keep up – or bumps up against psychological or environmental constraints – become extinct.
Red Queen Effect 

According to one of Gartner's articles: Most companies respond to this challenge (the Red Queen Effect) with an attitude of "work harder, faster", investing more in core competencies and familiar resources. This is a losing proposition. It amounts to doing more of the same, just at a quicker pace. Rather than accelerating innovation, this approach traps companies in what can be termed the "Red Queen Effect." In other words: most companies try to differentiate themselves from their competitors with better products or improved processes leading to a better cost structure. The problem is that their competitors do the same thing at the same time, so after a firework of new products the situation is pretty much the same. You moved forward but your competitor also moved forward. If you do not move, you fall behind.

Now, let's go back to security. So how is the Red Queen Effect influencing security, and what is the best example of it?

The scope of defection increases with more technology. This means that the societal pressures we traditionally put in place to limit defections no longer work, and we need to rethink security. It's easy to see this in terms of terrorism: one of the reasons terrorists are so scary today is that they can do more damage to society than the terrorists of 20 years ago could--and future technological developments will make the terrorists of 20 years from now scarier still.



It is now easily understood that the old-school competition, dog-eat-dog and arms race have many disadvantages, and in the worst-case scenario the level of security and trust may remain the same. What can be done to find a way to behave correctly without risk (incident response, innovations) and not end up with the Red Queen Effect? The last fragment from the book presented here may be the answer, or just the truth about ourselves:

We are social mammals whose brains are highly specialized for thinking about others. Understanding what others are up to – what they know and want, what they are doing and planning – has been so crucial to the survival of our species that our brains have developed an obsession with all things human. We think about people and their intentions, talk about them; look for and remember them. 
Check http://www.schneier.com/ for more information about the author (Liars and Outliers: Enabling the Trust that Society Needs to Thrive) and read this great book for more fascinating knowledge and paths. Sources: http://blog.business-model-innovation.com, and http://blogs.gartner.com.

Saturday, 23 March 2013

Infrastructure Availability Monitoring


Here, just a few words about physical infrastructure monitoring - in a way, about one component of the CIA triad: availability. It is not new or surprising that IT folks try to make their jobs easier by applying, to some degree, some automation. Manual log revision, or checking all servers one by one, can be a tiring and time-consuming task. Before making any statements, let's have a look at why monitoring is so important.

Why is it important to monitor physical infrastructure? Or, in other words, why is availability monitoring so crucial? First - the most prosaic reason - because administrators want to know that their servers are working and performing their tasks. In fact this is the most important argument, and the most difficult one. Secondly, to check server resiliency: what is the system load, what is the performance, and how can the servers be used to increase efficiency? This also relates to the problems of virtualization and costs, which at the same time are additional arguments. One may say that compliance and regulations are an argument for availability monitoring - nobody wants their data to be lost due to downtime on machines (software and hardware). The security aspect: checking how the system behaves and looking for anomalies may be an indicator of a DoS attack or IT sabotage going on. The last one is testing purposes - it is sometimes nice to have an estimation of how a particular appliance or piece of software is working, whether it is resource-demanding and so on.


Why is it important to monitor physical infrastructure?
- server resiliency (load distribution, load balancers)
- virtualization (still a performance issue)
- security (anomalies, downtime)
- regulations (data loss)
- costs
- development

There are of course tons of good software packages (open source and commercial) designed to monitor different factors of our machines, such as load average, CPU, disk capacity, IN/OUT on multiple interfaces and others. Many of these tools use round-robin databases (additionally using SNMP, which is simple to configure), which is a great option when there is not enough time for maintenance - such applications just work. It is of course possible to use more sophisticated scripts (using SSH for communication) which can report what is going on with particular processes on the remote server. And here an interesting discussion starts. How far should we push administrators on monitoring? Should we monitor every working process on the servers, or only the most important ones?

The workflow is simple. We 'add' remote machines to our monitoring server and present performance and other factors as simple (aggregated!) graphs, with colors or even sound warnings. The worst idea is to view tables of raw numbers, but even that can work, maybe with additional visualizations or meaningful colors (red/green?). It would be beneficial to add some threshold alerts to such a solution (moving average, moving average with dynamic deviation) and to configure emails for notice, warning and alert levels. In my humble opinion, this is still not enough.
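A minimal sketch of the 'moving average with dynamic deviation' alert mentioned above (illustrative numbers only; RRD-based tools implement this kind of check for you):

from statistics import mean, stdev

def check_anomaly(samples, window=10, k=3.0):
    """Alert when the latest sample deviates more than k standard
    deviations from the moving average of the previous window."""
    history, latest = samples[-window - 1:-1], samples[-1]
    avg, dev = mean(history), stdev(history)
    if dev == 0:
        return latest != avg, avg
    return abs(latest - avg) > k * dev, avg

# CPU load samples: stable around 1.0, then a sudden spike.
load = [1.0, 1.1, 0.9, 1.0, 1.2, 1.0, 0.9, 1.1, 1.0, 1.1, 6.5]
alert, baseline = check_anomaly(load)
print("alert!" if alert else "ok", "baseline:", round(baseline, 2))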


I would like to take one step back here. Imagine that you have a really important process working on your remote machines. This process is responsible for collecting data or for something else critical for your service. Due to performance issues, or an unexpected bug in the application (such bugs tend to reveal themselves only when the system is overloaded and 'catch-up' behavior can be spotted), the 'really important' process stopped and your graphs do not show the problem! The solution in this case is easy - why was the machine overloaded before, and why did nobody fix it? What is the point of 'monitoring the monitoring tool' when nobody looks at or cares about the warnings? The human factor is crucial, and even the best availability monitoring tools cannot replace a human.

A combination of dashboards, a warning system, daily checks and human attentiveness is the best option when dealing with infrastructure monitoring.