Understanding Network Profiling In Cybersecurity

Network Profiling In Cybersecurity: Facts To Note

 

In order to detect serious security incidents, it is important to understand, characterize, and analyze information about normal network functioning. Networks, servers, and hosts all exhibit typical behaviour for a given point in time. Network and device profiles can provide a statistical baseline that serves as a reference point for spotting deviations from normal operation.
Care must be taken when capturing baseline data so that all normal network operations are included in the baseline. In addition, it is important that the baseline is current. It should not include network performance data that is no longer part of normal functioning.
For example, rises in network utilization during periodic server backup operations are part of normal network functioning and should be part of the baseline data. However, traffic that corresponds to outside access to an internal server that has been moved to the cloud would not be.
A means of capturing just the right period for baseline measurement is known as sliding window anomaly detection. It defines a window that is most representative of network operation and deletes data that is out of date.
This process continues with repeated baseline measurements to ensure that baseline measurement statistics depict network operation with maximum accuracy.
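A minimal sketch of sliding window anomaly detection in Python may help make this concrete. The window size and utilization samples below are illustrative; in practice the samples would come from NetFlow records or SNMP counters:

```python
from collections import deque
from statistics import mean, stdev

class SlidingWindowBaseline:
    """Keep only the most recent samples so the baseline stays current."""

    def __init__(self, window_size):
        # Samples older than the window fall off automatically.
        self.samples = deque(maxlen=window_size)

    def add(self, value):
        self.samples.append(value)

    def is_anomalous(self, value, threshold=3.0):
        """Flag values more than `threshold` standard deviations from the mean."""
        if len(self.samples) < 2:
            return False  # not enough history to judge
        mu = mean(self.samples)
        sigma = stdev(self.samples)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > threshold

# Example: link utilization percentages sampled every 5 minutes.
baseline = SlidingWindowBaseline(window_size=288)  # 24 hours of 5-minute samples
for sample in [40, 42, 38, 41, 39, 43, 40, 38]:
    baseline.add(sample)

print(baseline.is_anomalous(41))  # a typical value
print(baseline.is_anomalous(95))  # a sudden spike, e.g. possible exfiltration
```

Because the deque discards old samples as new ones arrive, out-of-date measurements (such as traffic to a server since moved to the cloud) age out of the baseline on their own.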

 

Increased utilization of WAN links at unusual times can indicate a network breach and exfiltration of data. Hosts that begin to access obscure internet servers, resolve domains that are obtained through dynamic DNS, or use protocols or services that are not needed by the system user can also indicate compromise. Deviations in network behavior are difficult to detect if normal behavior is not known.

 

Tools like NetFlow and Wireshark can be used to characterize normal network traffic. Because organizations can make different demands on their networks depending on the time of day or day of the year, network baselining should be carried out over an extended period. The figure displays some questions to ask when establishing a network baseline.

 

The image shows a cloud with four labelled text boxes connected to it, each posing a baseline question. Session Duration (top left): What is the average time between the establishment of a data flow and its termination? Total Throughput (top right): What is the average amount of data passing from a given source to a given destination in a given period of time? Ports Used (bottom left): What is the list of acceptable TCP or UDP processes that are available to accept data? Critical Asset Address Space (bottom right): What is the IP address space of critical assets owned by the organization?

 

Elements of a Network Profile

The table lists important elements of the network profile.
Network Profile Element Description
Session duration This is the time between the establishment of a data flow and its termination.
Total throughput This is the amount of data passing from a given source to a given destination in a given period of time.
Ports used This is a list of TCP or UDP processes that are available to accept data.
Critical asset address space These are the IP addresses or the logical location of essential systems or data.
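The first two elements can be computed directly from flow records. A small sketch, using hypothetical (src, dst, start, end, bytes) tuples as stand-ins for data exported by a NetFlow collector:

```python
# Hypothetical flow records: (src, dst, start_time, end_time, bytes)
flows = [
    ("10.0.0.5", "10.0.0.20", 0, 120, 1_500_000),
    ("10.0.0.5", "10.0.0.20", 300, 360, 600_000),
]

def average_session_duration(flows):
    """Mean time between flow establishment and termination, in seconds."""
    return sum(end - start for _, _, start, end, _ in flows) / len(flows)

def total_throughput(flows, period_seconds):
    """Bytes per second from a given source to a given destination over a period."""
    return sum(b for *_, b in flows) / period_seconds

print(average_session_duration(flows))  # 90.0 seconds
print(total_throughput(flows, 3600))    # roughly 583 bytes per second
```

Collected repeatedly over an extended period, these per-element statistics form the baseline that later measurements are compared against.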
In addition, a profile of the types of traffic that typically enter and leave the network is an important tool in understanding network behavior. Malware can use unusual ports that may not be typically seen during normal network operation.
Host-to-host traffic is another important metric. Most network clients communicate directly with servers, so an increase of traffic between clients can indicate that malware is spreading laterally through the network.
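As an illustration of the host-to-host metric, the sketch below counts client-to-client flows against an assumed client subnet. The addresses and subnet are hypothetical:

```python
import ipaddress

CLIENT_NET = ipaddress.ip_network("10.0.1.0/24")  # assumed client subnet

def client_to_client_flows(flows):
    """Count flows where both endpoints are clients -- normally rare."""
    count = 0
    for src, dst in flows:
        if (ipaddress.ip_address(src) in CLIENT_NET
                and ipaddress.ip_address(dst) in CLIENT_NET):
            count += 1
    return count

flows = [("10.0.1.10", "10.0.2.5"),   # client to server: normal
         ("10.0.1.10", "10.0.1.22"),  # client to client: suspicious
         ("10.0.1.30", "10.0.1.11")]  # client to client: suspicious

# Clients normally talk only to servers, so a baseline near zero is expected.
if client_to_client_flows(flows) > 0:
    print("possible lateral movement between clients")
```

The threshold of zero is a simplification; a real deployment would compare against the measured baseline rate rather than assume none.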

 

Finally, changes in user behavior, as revealed by AAA, server logs, or a user profiling system like Cisco Identity Services Engine (ISE), are another valuable indicator. Knowing how individual users typically use the network leads to detection of potential compromise of user accounts.
A user who suddenly begins logging in to the network at strange times from a remote location should raise alarms if this behavior is a deviation from a known norm.

Server Profiling

Server profiling is used to establish the accepted operating state of servers. A server profile is a security baseline for a given server. It establishes the network, user, and application parameters that are accepted for a specific server.

 

In order to establish a server profile, it is important to understand the function that a server is intended to perform in a network. From there, various operating and usage parameters can be defined and documented.
The table lists elements of a server profile.
Server Profile Element Description
Listening ports These are the TCP and UDP daemons and ports that are normally allowed to be open on the server.
Logged in users and accounts These are the parameters defining user access and behaviour.
Service accounts These are the definitions of the type of service that an application is allowed to run.
Software environment These are the tasks, processes, and applications that are permitted to run on the server.
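Checking observed state against a server profile can be as simple as set arithmetic. A sketch, assuming a documented set of allowed listening ports for this server:

```python
ALLOWED_PORTS = {22, 80, 443}  # from the server's documented profile

def port_deviations(observed_ports):
    """Return ports open on the server but absent from the baseline profile."""
    return set(observed_ports) - ALLOWED_PORTS

# e.g. ports parsed from a scan of the server
observed = {22, 80, 443, 6667}
print(port_deviations(observed))  # {6667} -- an IRC port is not in the profile
```

The same pattern applies to the other profile elements: compare the observed accounts, services, and processes against the documented accepted set and investigate anything outside it.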

Network Anomaly Detection

Network behaviour is described by a large amount of diverse data such as the features of a packet flow, features of the packets themselves, and telemetry from multiple sources. One approach to the detection of network attacks is the analysis of this diverse, unstructured data using Big Data analytics techniques. This is known as network behaviour analysis (NBA).

 

This entails the use of sophisticated statistical and machine learning techniques to compare normal performance baselines with network performance at a given time. Significant deviations can be indicators of compromise. In addition, network behaviour can be analyzed for known network behaviours that indicate compromise.

 

Anomaly detection can recognize network traffic caused by worm activity that exhibits scanning behaviour. Anomaly detection also can identify infected hosts on the network that are scanning for other vulnerable hosts.
The figure illustrates a simplified version of an algorithm designed to detect an unusual condition at the border routers of an enterprise.

For example, the cybersecurity analyst could provide the following values:

  • X = 5
  • Y = 100
  • Z = 30
  • N = 500

Now, the algorithm can be interpreted as: Every 5th minute, get a sampling of 1/100th of the flows during second 30. If the number of flows is greater than 500, generate an alarm. If the number of flows is less than 500, do nothing. This is a simple example of using a traffic profile to identify the potential for data loss.
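The sampling and threshold check can be sketched as follows. The flow records are stand-ins, and a systematic every-Yth-flow sample is used for simplicity; a real implementation would be scheduled to run every X minutes at second Z, drawing flows from a NetFlow collector:

```python
def sample_flows(flows, y):
    """Take a systematic 1/y sample of the flows observed during second Z."""
    return flows[::y]

def should_alarm(flows, y=100, n=500):
    """Alarm when the sampled flow count exceeds the threshold N."""
    return len(sample_flows(flows, y)) > n

# Scheduled every X (5) minutes, at second Z (30) of that minute.
# 60,000 observed flows -> a 1/100 sample is 600 flows, which exceeds N = 500.
flows = list(range(60_000))   # stand-in for real flow records
if should_alarm(flows, y=100, n=500):
    print("ALARM: flow count exceeds threshold")
```

Tuning X, Y, Z, and N trades off detection sensitivity against the cost of sampling and the rate of false alarms.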

 

In addition to statistical and behavioural approaches to anomaly detection, there is rule-based anomaly detection. Rule-based detection analyzes decoded packets for attacks based on pre-defined patterns.

Network Vulnerability Testing

Most organizations connect to public networks in some way due to the need to access the internet. These organizations must also provide internet-facing services of various types to the public. Because of the vast number of potential vulnerabilities, and the fact that new vulnerabilities can be created within an organization's network and its internet-facing services, periodic security testing is essential.

 

The table lists various types of tests that can be performed.
Term Description
Risk Analysis
  • This is a discipline in which analysts evaluate the risk posed by vulnerabilities to a specific organization.
  • Risk analysis includes an assessment of the likelihood of attacks, identifies types of likely threat actors, and evaluates the impact of successful exploits on the organization.
Vulnerability Assessment
  • This test employs software to scan internet-facing servers and internal networks for various types of vulnerabilities.
  • These vulnerabilities include unknown infections, weaknesses in web-facing database services, missing software patches, unnecessary listening ports, etc.
  • Tools for vulnerability assessment include the open-source OpenVAS platform, Microsoft Baseline Security Analyzer, Nessus, Qualys, and FireEye Mandiant services.
  • Vulnerability assessment includes, but goes beyond, port scanning.
Penetration Testing
  • This type of test uses authorized simulated attacks to test the strength of network security.
  • Internal personnel with hacker experience, or professional ethical hackers, identify assets that could be targeted by threat actors.
  • A series of exploits are used to test the security of those assets.
  • Simulated exploit software tools are frequently used.
  • Penetration testing does not only verify that vulnerabilities exist, it actually exploits those vulnerabilities to determine the potential impact of a successful exploit.
  • An individual penetration test is often known as a pen test.
  • Metasploit is a tool used in penetration testing.
  • CORE Impact offers penetration testing software and services.
The table lists examples of activities and tools that are used in vulnerability testing.
Activity Description Tools
Risk analysis Individuals conduct a comprehensive analysis of the impacts of attacks on core company assets and functioning Internal or external consultants, risk management frameworks
Vulnerability Assessment Patch management, host scans, port scanning, other vulnerability scans and services OpenVAS, Microsoft Baseline Security Analyzer, Nessus, Qualys, Nmap
Penetration Testing Use of hacking techniques and tools to penetrate network defences and identify the depth of potential penetration Metasploit, CORE Impact, ethical hackers
Action Point
PS: If you would like to have an online course on any of the courses that you found on this blog, I will be glad to do that at an individual or corporate level. I have trained several individuals and groups, and they are doing well in their various fields of endeavour. Some of those that I have trained include staff of Dangote Refinery, FCMB, Zenith Bank, and New Horizons Nigeria, among others. Please come on WhatsApp and let's talk about your training. You can reach me on WhatsApp HERE. Please note that I will be using Microsoft Teams to facilitate the training.

I know you might agree with some of the points that I have raised in this article, and you might not agree with others. Let me know your views about the topic discussed. We will appreciate it if you can drop a comment. Thanks in anticipation.

 

Fact Check Policy

CRMNAIJA is committed to fact-checking in a fair, transparent and non-partisan manner. Therefore, if you’ve found an error in any of our reports, be it factual, editorial, or an outdated post, please contact us to tell us about it.

 

       

Common Vulnerability Scoring System: Facts To Note

 

The Common Vulnerability Scoring System (CVSS) is a risk assessment tool that is designed to convey the common attributes and severity of vulnerabilities in computer hardware and software systems. The third revision, CVSS 3.0, is a vendor-neutral, industry-standard, open framework for weighting the risks of a vulnerability using a variety of metrics. These weights combine to provide a score of the risk inherent in a vulnerability. The numeric score can be used to determine the urgency of the vulnerability, and the priority of addressing it. The benefits of the CVSS can be summarized as follows:

  • It provides standardized vulnerability scores that should be meaningful across organizations.
  • It provides an open framework with the meaning of each metric openly available to all users.
  • It helps prioritize risk in a way that is meaningful to individual organizations.

 

The Forum of Incident Response and Security Teams (FIRST) has been designated as the custodian of the CVSS to promote its adoption globally. The Version 3 standard was developed with contributions by Cisco and other industry partners. Version 3.1 was released in June of 2019. The figure displays the specification page for the CVSS at the FIRST website.

 

CVSS Metric Groups

Before performing a CVSS assessment, it is important to know key terms that are used in the assessment instrument.
Many of the metrics address the role of what the CVSS calls an authority. An authority is a computer entity, such as a database, operating system, or virtual sandbox, that grants and manages access and privileges to users.
The image displays the CVSS Metric Groups. There are three boxes shown side by side. The first box, on the left, is titled Base Metric Group.
Within this box are two columns: Exploitability metrics and Impact metrics. Under the Exploitability column are four items: attack vector, attack complexity, privileges required, and user interaction.
Under the Impact column are three items: confidentiality impact, integrity impact and availability impact. Spanning both columns at the bottom is Scope. The second box, in the middle, is titled Temporal Metric Group.
This box contains three items: Exploit code maturity, remediation level, and report confidence. The third box, on the right, is titled Environmental Metric Group. It contains four items: modified base metrics, confidentiality requirement, integrity requirement, and availability requirement.

Base Metric Group

The Base Metric Group represents the characteristics of a vulnerability that are constant over time and across contexts. It has two classes of metrics:

  • Exploitability – These are features of the exploit such as the vector, complexity, and user interaction required by the exploit.
  • Impact metrics – The impacts of the exploit are rooted in the CIA triad of confidentiality, integrity, and availability.

CVSS Base Metric Group

Criteria Description
Attack vector This is a metric that reflects the proximity of the threat actor to the vulnerable component. The more remote the threat actor is to the component, the higher the severity. Threat actors close to your network or inside your network are easier to detect and mitigate.
Attack complexity This is a metric that expresses the number of components, software, hardware, or networks, that are beyond the attacker’s control and that must be present for a vulnerability to be successfully exploited.
Privileges required This is a metric that captures the level of access that is required for a successful exploit of the vulnerability.
User interaction This metric expresses the presence or absence of the requirement for user interaction for an exploit to be successful.
Scope This metric expresses whether multiple authorities must be involved in an exploit. This is expressed as whether the initial authority changes to a second authority during the exploit.
The Base Metric Group Impact metrics increase with the degree or consequence of loss due to the impacted component. The table lists the impact metric components.
Term Description
Confidentiality Impact This is a metric that measures the impact to confidentiality due to a successfully exploited vulnerability. Confidentiality refers to the limiting of access to only authorized users.
Integrity Impact This is a metric that measures the impact on integrity due to a successfully exploited vulnerability. Integrity refers to the trustworthiness and authenticity of the information.
Availability Impact This is a metric that measures the impact to availability due to a successfully exploited vulnerability. Availability refers to the accessibility of information and network resources. Attacks that consume network bandwidth, processor cycles, or disk space all impact the availability.

The CVSS Process

The CVSS Base Metrics Group is designed as a way to assess security vulnerabilities that are found in software and hardware systems. It describes the severity of a vulnerability based on the characteristics of a successful exploit of the vulnerability. The other metric groups modify the base severity score by accounting for how the base severity rating is affected by time and environmental factors.
The CVSS process uses a tool called the CVSS v3.1 Calculator, shown in the figure.
The calculator is like a questionnaire in which choices are made that describe the vulnerability for each metric group. After all choices are made, a score is generated. Pop-up text that explains each metric and metric value is displayed by hovering the mouse over each. Choices are made by choosing one of the values for the metric. Only one choice can be made per metric.
The CVSS calculator can be accessed on the CVSS portion of the FIRST website.
A detailed user guide that defines metric criteria, examples of assessments of common vulnerabilities, and the relationship of metric values to the final score is available to support the process.
After the Base Metric group is completed, the numeric severity rating is displayed, as shown in the figure.
A vector string is also created that summarizes the choices made. If other metric groups are completed, those values are appended to the vector string.
The string consists of the initials for each metric and an abbreviated value for the selected metric value, separated by a colon. The metric-value pairs are separated by slashes. The vector strings allow the results of the assessment to be easily shared and compared.
The table lists the key for the Base Metric group.
Metric (Initials): Possible Values
  • Attack Vector (AV): N = Network, A = Adjacent, L = Local, P = Physical
  • Attack Complexity (AC): L = Low, H = High
  • Privileges Required (PR): N = None, L = Low, H = High
  • User Interaction (UI): N = None, R = Required
  • Scope (S): U = Unchanged, C = Changed
  • Confidentiality Impact (C): H = High, L = Low, N = None
  • Integrity Impact (I): H = High, L = Low, N = None
  • Availability Impact (A): H = High, L = Low, N = None
The values for the numeric severity rating string CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N are listed in the table.
Metric Name: Value
  • Attack Vector (AV): Network
  • Attack Complexity (AC): Low
  • Privileges Required (PR): High
  • User Interaction (UI): None
  • Scope (S): Unchanged
  • Confidentiality Impact (C): Low
  • Integrity Impact (I): Low
  • Availability Impact (A): None
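Decoding such a vector string back into readable values can be sketched with a small lookup table built from the key above. The parser assumes a well-formed CVSS v3.1 Base vector:

```python
# Key for the Base metrics (abbreviation -> human-readable value)
VALUE_NAMES = {
    "AV": {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"},
    "AC": {"L": "Low", "H": "High"},
    "PR": {"N": "None", "L": "Low", "H": "High"},
    "UI": {"N": "None", "R": "Required"},
    "S":  {"U": "Unchanged", "C": "Changed"},
    "C":  {"H": "High", "L": "Low", "N": "None"},
    "I":  {"H": "High", "L": "Low", "N": "None"},
    "A":  {"H": "High", "L": "Low", "N": "None"},
}

def parse_cvss_vector(vector):
    """Split a CVSS v3.1 Base vector string into readable metric/value pairs."""
    parts = vector.split("/")
    assert parts[0] == "CVSS:3.1", "unexpected version prefix"
    result = {}
    for pair in parts[1:]:
        metric, value = pair.split(":")
        result[metric] = VALUE_NAMES[metric][value]
    return result

print(parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N"))
```

Tools that ingest vendor advisories use parsing like this to compare assessments across vulnerabilities without re-running the calculator.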
In order for a score to be calculated for the Temporal or Environmental metric groups, the Base Metric group must first be completed. The Temporal and Environmental metric values then modify the Base Metric results to provide an overall score.
The image depicts the interaction of scores for the metric groups. At the top left of the graphic are the Base Metric Group metrics, set by the vendor; once set, they do not change. An arrow connects these metrics to a cloud representing the base formula, and an arrow points from the cloud to a circle representing the base score.
On the left, under the Base Metric Group, is the Temporal Metric Group, also set by the vendor; once set, its values change with time. An arrow connects the Temporal Metric Group metrics to another cloud, representing the temporal formula.
The temporal formula uses the temporal metrics and the base score to create the temporally adjusted score. On the left, under the Temporal Metric Group, are the Environmental Metric Group metrics, optionally set by end users. An arrow connects the Environmental Metric Group metrics to a cloud representing the environmental formula.
The environmental formula uses the Environmental Metric Group metrics and the temporally adjusted score to create the environmentally adjusted score. Source: www.first.org

CVSS Reports

The ranges of scores and their corresponding qualitative meanings are shown in the table.
Rating CVSS Score
None 0
Low 0.1 – 3.9
Medium 4.0 – 6.9
High 7.0 – 8.9
Critical 9.0 – 10.0
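The mapping from numeric score to qualitative rating is straightforward to express in code; a small sketch:

```python
def cvss_rating(score):
    """Map a numeric CVSS score (0.0-10.0) to its qualitative severity label."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(5.4))  # Medium
print(cvss_rating(9.8))  # Critical
```

Because the bands are contiguous, checking upper bounds in ascending order is enough; no range table is needed.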

Frequently, the Base and Temporal metric group scores will be supplied to customers by the application or security vendor in whose product the vulnerability has been discovered. The affected organization completes the environmental metric group to tailor the vendor-supplied scoring to the local context.

The resulting score serves to guide the affected organization in the allocation of resources to address the vulnerability. The higher the severity rating, the greater the potential impact of an exploit and the greater the urgency in addressing the vulnerability. While not as precise as the numeric CVSS scores, the qualitative labels are very useful for communicating with stakeholders who are unable to relate to the numeric scores.

In general, any vulnerability that exceeds 3.9 should be addressed. The higher the rating level, the greater the urgency for remediation.

Other Vulnerability Information Sources

There are other important vulnerability information sources. These work together with the CVSS to provide a comprehensive assessment of vulnerability severity. There are two systems that operate in the United States:
Common Vulnerabilities and Exposures (CVE)
This is a dictionary of common names, in the form of CVE identifiers, for known cybersecurity vulnerabilities. The CVE identifier provides a standard way to refer to and research vulnerabilities. When a vulnerability has been identified, CVE identifiers can be used to access fixes. In addition, threat intelligence services use CVE identifiers, and they appear in various security system logs. The CVE Details website links CVSS scores to CVE information and allows browsing of CVE vulnerability records by CVSS severity rating.
Search the internet for MITRE for more information on CVE, as shown in the figure.
National Vulnerability Database (NVD)
This utilizes CVE identifiers and supplies additional information on vulnerabilities such as CVSS threat scores, technical details, affected entities, and resources for further investigation. The database was created and is maintained by the U.S. National Institute of Standards and Technology (NIST).

 


Risk Management In Cybersecurity: Facts To Note

Risk management in cybersecurity involves the selection and specification of security controls for an organization. It is part of an ongoing organization-wide information security program that involves the management of the risk to the organization or to individuals associated with the operation of a system.
The image is a diagram of the Risk Management Process. There are five small circles, arranged in a circle representing the risk management process. Each circle is connected to the next by arrows pointing clockwise.
Within the top circle is Risk Identification: identify assets, vulnerabilities, threats. The second circle is Risk Assessment: score, weigh, prioritize risks. In the third circle is Risk Response Planning: determine risk response, plan actions. In the fourth circle is Response Implementation: implement the response. In the fifth circle is Monitor and Assess Results: continuous risk monitoring and response evaluation. The arrow points back to the first circle.

A Risk Management Process

Risk is determined as the relationship between threat, vulnerability, and the nature of the organization. It first involves answering the following questions as part of a risk assessment:

  • Who are the threat actors who want to attack us?
  • What vulnerabilities can threat actors exploit?
  • How would we be affected by attacks?
  • What is the likelihood that different attacks will occur?

 
NIST Special Publication 800-30 describes risk assessment as:
…the process of identifying, estimating, and prioritizing information security risks. Assessing risk requires the careful analysis of threat and vulnerability information to determine the extent to which circumstances or events could adversely impact an organization and the likelihood that such circumstances or events will occur.
 
The full publication is available for download from NIST.
A mandatory activity in risk assessment is the identification of threats and vulnerabilities and the matching of threats with vulnerabilities in what is often called threat-vulnerability (T-V) pairing. The T-V pairs can then be used as a baseline to indicate risk before security controls are implemented. This baseline can then be compared to ongoing risk assessments as a means of evaluating risk management effectiveness. This part of risk assessment is referred to as determining the inherent risk profile of an organization.
 
After the risks are identified, they may be scored or weighted as a way of prioritizing risk reduction strategies. For example, vulnerabilities that correspond to multiple threats can receive higher ratings. In addition, T-V pairs that map to the greatest institutional impact will also receive higher weightings.
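A sketch of this scoring step, using hypothetical T-V pairs and simple 1-5 likelihood and impact scales (real risk management frameworks define their own scales and weighting rules):

```python
# Hypothetical threat-vulnerability (T-V) pairs with 1-5 likelihood/impact scores
tv_pairs = [
    {"pair": "phishing / weak MFA",        "likelihood": 4, "impact": 4},
    {"pair": "SQL injection / legacy app", "likelihood": 3, "impact": 5},
    {"pair": "theft / unlocked cabinet",   "likelihood": 1, "impact": 2},
]

def prioritize(pairs):
    """Rank T-V pairs by a simple likelihood x impact risk score, highest first."""
    return sorted(pairs, key=lambda p: p["likelihood"] * p["impact"], reverse=True)

for p in prioritize(tv_pairs):
    print(p["pair"], p["likelihood"] * p["impact"])
```

The ranked list then feeds directly into the response decision: the highest-scoring pairs are candidates for avoidance or reduction, while the lowest may simply be retained.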
 
The table lists the four potential ways to respond to risks that have been identified, based on their weightings or scores.
 

Risk Description
Risk avoidance
  • Stop performing the activities that create risk.
  • It is possible that as a result of a risk assessment, it is determined that the risk involved in an activity outweighs the benefit of the activity to the organization.
  • If this is found to be true, then it may be determined that the activity should be discontinued.
Risk reduction
  • Decrease the risk by taking measures to reduce vulnerability.
  • This involves implementing management approaches discussed earlier in this chapter.
  • For example, if an organization uses server operating systems that are frequently targeted by threat actors, risk can be reduced through ensuring that the servers are patched as soon as vulnerabilities have been identified.
Risk sharing
  • Shift some of the risks to other parties.
  • For example, a risk-sharing technique might be to outsource some aspects of security operations to third parties.
  • Hiring a security as a service (SECaaS) provider to perform security monitoring is an example.
  • Another example is to buy insurance that will help to mitigate some of the financial losses due to a security incident.
Risk-retention
  • Accept the risk and its consequences.
  • This strategy is acceptable for risks that have a low potential impact and relatively high cost of mitigation or reduction.
  • Other risks that may be retained are those that are so dramatic that they cannot really be avoided, reduced, or shared.

Vulnerability Management

According to NIST, vulnerability management is a security practice that is designed to proactively prevent the exploitation of IT vulnerabilities that exist within an organization. The expected result is to reduce the time and money spent dealing with vulnerabilities and the exploitation of those vulnerabilities.
Proactively managing vulnerabilities of systems will reduce or eliminate the potential for exploitation and involve considerably less time and effort than responding after exploitation has occurred.
Vulnerability management requires a robust means of identifying vulnerabilities based on vendor security bulletins and other information systems such as CVE.
Security personnel must be competent in assessing the impact, if any, of vulnerability information they have received. Solutions should be identified with effective means of implementing and assessing the unanticipated consequences of implemented solutions. Finally, the solution should be tested to verify that the vulnerability has been eliminated.
Image is a diagram of the Vulnerability Management Life Cycle. There are six small circles, arranged in a larger circle representing phases in the Vulnerability Management Lifecycle.
Each circle is connected to the next by arrows pointing clockwise. The phases shown in the circles are Discover, Prioritize Assets, Assess, Report, Remediate, and Verify. The last arrow points back to the Discover phase.

Vulnerability Management Life Cycle

The life cycle begins with the Discover phase: inventory all assets across the network and identify host details, including operating systems and open services, to identify vulnerabilities. Develop a network baseline, and identify security vulnerabilities on a regular, automated schedule.

Asset Management

Asset management involves the implementation of systems that track the location and configuration of networked devices and software across an enterprise. As part of any security management plan, organizations must know what equipment accesses the network, where that equipment is within the enterprise and logically on the network, and what software and data those systems store or can access.
Asset management not only tracks corporate assets and other authorized devices but also can be used to identify devices that are not authorized on the network.
NIST specifies, in publication NISTIR 8011 Volume 2, the detailed records that should be kept for each relevant device. NIST also describes potential techniques and tools for operationalizing an asset management process:

 

  • Automated discovery and inventory of the actual state of devices
  • Articulation of the desired state for those devices using policies, plans, and procedures in the organization’s information security plan
  • Identification of non-compliant authorized assets
  • Remediation or acceptance of device state, possible iteration of desired state definition
  • Repeat the process at regular intervals, or ongoing
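These steps can be sketched as a comparison between a desired-state inventory and the discovered devices. The device names and attributes below are hypothetical:

```python
# Desired state from the security plan vs. discovered state, per device
desired = {"srv01": {"os": "Ubuntu 22.04", "agent": True},
           "srv02": {"os": "Ubuntu 22.04", "agent": True}}
actual  = {"srv01": {"os": "Ubuntu 22.04", "agent": True},
           "srv02": {"os": "Ubuntu 20.04", "agent": False},
           "printer9": {"os": "unknown", "agent": False}}

def assess(desired, actual):
    """Find devices not in the inventory, and authorized devices out of policy."""
    unauthorized = sorted(set(actual) - set(desired))   # not in inventory at all
    noncompliant = sorted(d for d in desired
                          if d in actual and actual[d] != desired[d])
    return unauthorized, noncompliant

print(assess(desired, actual))  # (['printer9'], ['srv02'])
```

Run at regular intervals, this comparison surfaces both rogue devices (for removal or enrollment) and drifted authorized devices (for remediation or formal acceptance).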

Mobile Device Management

Mobile device management (MDM), especially in the age of BYOD, presents special challenges to asset management. Mobile devices cannot be physically controlled on the premises of an organization. They can be lost, stolen, or tampered with, putting data and network access at risk. Part of an MDM plan is acting when devices leave the custody of the responsible party.
Measures that can be taken include disabling the lost device, encrypting the data on the device, and enhancing device access with more robust authentication measures.
Due to the diversity of mobile devices, it is possible that some devices that will be used on the network are inherently less secure than others. Network administrators should assume that all mobile devices are untrusted until they have been properly secured by the organization.
MDM systems, such as Cisco Meraki Systems Manager, shown in the figure, allow security personnel to configure, monitor and update a very diverse set of mobile clients from the cloud.

Configuration Management

Configuration management addresses the inventory and control of hardware and software configurations of systems. Secure device configurations reduce security risk. For example, an organization provides many computers and laptops to its workers. This enlarges the attack surface for the organization, because each system may be vulnerable to exploits.
To manage this, the organization may create baseline software images and hardware configurations for each type of machine. These images may include a basic package of required software, endpoint security software, and customized security policies that control user access to aspects of the system configuration that could be made vulnerable. Hardware configurations may specify the permitted types of network interfaces and the permitted types of external storage.
Configuration management extends to the software and hardware configuration of networking devices and servers as well. As defined by NIST, configuration management:
Comprises a collection of activities focused on establishing and maintaining the integrity of products and systems, through control of the processes for initializing, changing, and monitoring the configurations of those products and systems.
NIST Special Publication 800-128 on configuration management for network security is available for download from NIST.
For internetworking devices, software tools are available that will backup configurations, detect changes in configuration files, and enable bulk change of configurations across a number of devices.
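The "detect changes in configuration files" capability amounts to comparing a stored baseline against the running configuration. A minimal sketch in Python, assuming invented example configs, might look like this:

```python
import difflib
import hashlib

# Illustrative configuration drift detection. The baseline and running
# configs below are invented examples, not real device output.

baseline = """hostname edge-router
ntp server 10.0.0.1
logging host 10.0.0.50
"""

running = """hostname edge-router
ntp server 10.0.0.1
logging host 203.0.113.99
"""

def fingerprint(cfg: str) -> str:
    # A stored hash makes it cheap to detect that anything changed at all
    return hashlib.sha256(cfg.encode()).hexdigest()

changed = fingerprint(baseline) != fingerprint(running)

# When a change is detected, a unified diff shows exactly what moved
diff = list(difflib.unified_diff(baseline.splitlines(),
                                 running.splitlines(), lineterm=""))
```

Here the diff would reveal that the logging host was redirected, the kind of change a security team would want flagged immediately.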
With the advent of cloud data centres and virtualization, the management of numerous servers presents special challenges. Tools like Puppet, Chef, Ansible, and SaltStack enable efficient management of servers that are used in cloud-based computing.

Action Point
PS: If you would like to have an online course on any of the topics covered on this blog, I will be glad to provide it at an individual or corporate level. I have trained several individuals and groups, and they are doing well in their various fields of endeavour. Some of those I have trained include staff of Dangote Refinery, FCMB, Zenith Bank, and New Horizons Nigeria, among others. Please come on WhatsApp and let's talk about your training. You can reach me on WhatsApp HERE. Please note that I will be using Microsoft Teams to facilitate the training.
 

I know you might agree with some of the points that I have raised in this article. You might not agree with some of the issues raised. Let me know your views about the topic discussed. We will appreciate it if you can drop your comment. Thanks in anticipation.

 

Fact Check Policy

CRMNIGERIA is committed to fact-checking in a fair, transparent and non-partisan manner. Therefore, if you’ve found an error in any of our reports, be it factual, editorial, or an outdated post, please contact us to tell us about it.

 

       

Information Security Management System: Facts To Note

An Information Security Management System (ISMS) consists of a management framework through which an organization identifies, analyzes, and addresses information security risks.
ISMSs are not based on servers or security devices. Instead, an ISMS consists of a set of practices that are systematically applied by an organization to ensure continuous improvement in information security. ISMSs provide conceptual models that guide organizations in planning, implementing, governing, and evaluating information security programs.

 

ISMSs are a natural extension of the use of popular business models, such as Total Quality Management (TQM) and Control Objectives for Information and Related Technologies (COBIT), into the realm of cybersecurity.
An ISMS is a systematic, multi-layered approach to cybersecurity. The approach includes people, processes, technologies, and the cultures in which they interact in a process of risk management. An ISMS often incorporates the “plan-do-check-act” framework, known as the Deming cycle, from TQM. It is seen as an elaboration on the process component of the People-Process-Technology-Culture model of organizational capability, as shown in the figure.
The figure shows a general model for organizational capability: People, Process, Technology, and Culture are arranged in a ring around Capability, with arrows pointing both ways between all of the components. The Process component is expanded on the right into the four steps of the plan-do-check-act framework, shown in a clockwise circle surrounding the text: Develop, Improve, Maintain, ISMS.

A General Model for Organizational Capability

ISO-27001

ISO is the International Organization for Standardization. ISO’s voluntary standards are internationally accepted and facilitate business conducted between nations.
ISO partnered with the International Electrotechnical Commission (IEC) to develop the ISO/IEC 27000 series of specifications for ISMSs, as shown in the table.
Standard Description
ISO/IEC 27000 Information security management systems – Overview and vocabulary – Introduction to the standards family, overview of ISMS, essential vocabulary.
ISO/IEC 27001 Information security management systems – Requirements – Provides an overview of ISMS and the essentials of ISMS processes and procedures.
ISO/IEC 27003 Information security management system implementation guidance – Critical factors necessary for successful design and implementation of ISMS.
ISO/IEC 27004 Information security management – Monitoring, measurement, analysis and evaluation – Discussion of metrics and measurement procedures to assess the effectiveness of ISMS implementation.
ISO/IEC 27005 Information security risk management – Supports the implementation of ISMS based on a risk-centred management approach.
The ISO 27001 certification is a global, industry-wide specification for an ISMS. The figure illustrates the relationship of actions stipulated by the standard with the plan-do-check-act cycle.
In the figure, the four steps in the plan-do-check-act framework are shown in a clockwise circle surrounding the text: Develop, Improve, Maintain, ISMS.

ISO 27001 ISMS Plan-Do-Check-Act Cycle

  • Understand relevant business objectives
  • Define scope of activities
  • Assess and manage support
  • Assess and define risk
  • Perform asset management and vulnerability assessment
ISO-27001 certification means an organization’s security policies and procedures have been independently verified to provide a systematic and proactive approach for effectively managing security risks to confidential customer information.

NIST Cybersecurity Framework

NIST is very active in the area of cybersecurity, as we have seen in this module. More NIST standards will be discussed later in the course.
NIST has also developed the Cybersecurity Framework, which is similar to the ISO/IEC 27000 standards.
The NIST framework is a set of standards designed to integrate existing standards, guidelines, and practices to help better manage and reduce cybersecurity risk. The framework was first issued in February 2014 and continues to undergo development.
The framework core consists of a set of activities suggested to achieve specific cybersecurity outcomes, and references examples of guidance to achieve those outcomes. The core functions, which are defined in the table, are split into major categories and subcategories.
Core Function Description
IDENTIFY Develop an organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities.
PROTECT Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.
DETECT Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.
RESPOND Develop and implement the appropriate activities to act on a detected cybersecurity event.
RECOVER Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.
The major categories provide an understanding of the types of activities and outcomes related to each function, as shown in the next table.
Core Function Outcome Categories
IDENTIFY
  • Asset Management
  • Business Environment
  • Governance
  • Risk Assessment
  • Risk Management Strategy
PROTECT
  • Identity Management and Access Control
  • Information Protection Processes and Procedures
  • Maintenance
  • Protective Technology
DETECT
  • Anomalies and Events
  • Security Continuous Monitoring
  • Detection Processes
RESPOND
  • Response Planning
  • Communications
  • Analysis
  • Mitigation
  • Improvements
RECOVER
  • Recovery Planning
  • Improvements
  • Communications
Organizations of many types are using the Framework in a number of ways. Many have found it helpful in raising awareness and communicating with stakeholders within their organization, including executive leadership. The Framework is also improving communications across organizations, allowing cybersecurity expectations to be shared with business partners, suppliers, and among sectors. By mapping the Framework to current cybersecurity management approaches, organizations are learning and showing how they match up with the Framework’s standards, guidelines, and best practices. Some parties are using the Framework to reconcile internal policy with legislation, regulation, and industry best practice. The Framework also is being used as a strategic planning tool to assess risks and current practices.
Action Point
PS: If you would like to have an online course on any of the courses that you found on this blog, I will be glad to do that on an individual and corporate level, I will be very glad to do that I have trained several individuals and groups and they are doing well in their various fields of endeavour. Some of those that I have trained includes staffs of Dangote Refinery, FCMB, Zenith Bank, New Horizons Nigeria among others. Please come on Whatsapp and let’s talk about your training. You can reach me on Whatsapp HERE. Please note that I will be using Microsoft Team to facilitate the training. 

I know you might agree with some of the points that I have raised in this article. You might not agree with some of the issues raised. Let me know your views about the topic discussed. We will appreciate it if you can drop your comment. Thanks in anticipation.

 

Fact Check Policy

CRMNUGGETS is committed to fact-checking in a fair, transparent and non-partisan manner. Therefore, if you’ve found an error in any of our reports, be it factual, editorial, or an outdated post, please contact us to tell us about it.

 

       
Fact Check Policy

Ways Of Monitoring Syslog And NTP Protocols Effectively

 

Various protocols that commonly appear on networks have features that make them of special interest in security monitoring. For example, Syslog and Network Time Protocol (NTP) are essential to the work of the cybersecurity analyst. In this article, I will be talking about how to use Syslog and NTP protocols effectively.
 
The Syslog standard is used for logging event messages from network devices and endpoints, as shown in the figure. The standard allows for a system-neutral means of transmitting, storing, and analyzing messages. Many types of devices from many different vendors can use Syslog to send log entries to central servers that run a Syslog daemon. This centralization of log collection helps to make security monitoring practical. Servers that run Syslog typically listen on UDP port 514.
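The mechanics of a syslog message are simple enough to sketch in Python. In this hedged example, the facility, severity, hostname, and server address are all illustrative assumptions; the PRI value is computed as described in the BSD syslog convention (facility × 8 + severity), and the message would be sent as a UDP datagram to port 514.

```python
import socket

# Minimal syslog sender sketch (RFC 3164-style framing). The facility,
# severity, hostname, and server address are illustrative assumptions.

def format_syslog(facility: int, severity: int, hostname: str, msg: str) -> bytes:
    pri = facility * 8 + severity  # priority value: facility * 8 + severity
    return f"<{pri}>{hostname} {msg}".encode()

def send_syslog(payload: bytes, server: str = "127.0.0.1", port: int = 514) -> None:
    # Syslog servers conventionally listen on UDP port 514
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (server, port))

# Facility 4 (security/auth) at severity 6 (informational) gives PRI 38
payload = format_syslog(facility=4, severity=6, hostname="fw-01", msg="login ok")
```

Because the transport is plain UDP with no authentication, anything on the network path can forge or drop these messages, which is exactly why syslog infrastructure is an attractive target, as discussed next.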

 

Because Syslog is so important to security monitoring, Syslog servers may be a target for threat actors. Some exploits, such as those involving data exfiltration, can take a long time to complete due to the very slow ways in which data is secretly stolen from the network. Some attackers may try to hide the fact that exfiltration is occurring.

They attack Syslog servers that contain the information that could lead to detection of the exploit. Hackers may attempt to block the transfer of data from Syslog clients to servers, tamper with or destroy log data, or tamper with the software that creates and transmits log messages. The next-generation (ng) Syslog implementation, known as Syslog-ng, offers enhancements that can help prevent some of the exploits that target Syslog.

 
Search the internet for more information about Syslog-ng.
The figure shows a cloud of network devices sending event messages to a central syslog server that compiles the logs, which are in turn viewed from a security monitoring station.

NTP

Syslog messages are usually timestamped. This allows messages from different sources to be organized by time to provide a view of network communication processes. Because the messages can come from many devices, it is important that the devices share a consistent time clock. One way that this can be achieved is for the devices to use Network Time Protocol (NTP).
NTP uses a hierarchy of authoritative time sources to share time information between devices on the network, as shown in the figure. In this way, device messages that share consistent time information can be submitted to the syslog server. NTP operates on UDP port 123.
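A simple SNTP-style query illustrates how a device obtains time over UDP port 123. In this sketch, the server name and timeout are assumptions; the 48-byte request asks a server for its transmit timestamp, and the conversion helper accounts for the fact that NTP counts seconds from 1900 while Unix counts from 1970.

```python
import socket
import struct

# SNTP client sketch. "pool.ntp.org" and the timeout are assumptions.

NTP_UNIX_OFFSET = 2_208_988_800  # seconds between 1900 (NTP) and 1970 (Unix) epochs

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert an NTP seconds value to a Unix timestamp."""
    return ntp_seconds - NTP_UNIX_OFFSET

def query_sntp(server: str = "pool.ntp.org", timeout: float = 5.0) -> int:
    request = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(request, (server, 123))  # NTP operates on UDP port 123
        data, _ = s.recvfrom(512)
    # Transmit timestamp: seconds field at byte offset 40 of the reply
    transmit = struct.unpack("!I", data[40:44])[0]
    return ntp_to_unix(transmit)
```

Nothing in this exchange is authenticated by default, which is one reason the NTP infrastructure itself can be attacked, as described below.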
Because events that are connected to an exploit can leave traces across every network device on their path to the target system, timestamps are essential for detection. Threat actors may attempt to attack the NTP infrastructure in order to corrupt time information used to correlate logged network events.
This can serve to obfuscate traces of ongoing exploits.
In addition, threat actors have been known to use NTP systems to direct DDoS attacks through vulnerabilities in client or server software. While these attacks do not necessarily result in corrupted security monitoring data, they can disrupt network availability.

 

The figure shows an authoritative time source distributing time to a local NTP server, which in turn synchronizes a firewall, Layer 3 switch, Layer 2 switch, and router.

DNS

The Domain Name System (DNS) is used by millions of people daily. Because of this, many organizations have less stringent policies in place to protect against DNS-based threats than they have to protect against other types of exploits. Attackers have recognized this and commonly encapsulate different network protocols within DNS to evade security devices.
DNS is now used by many types of malware. Some varieties of malware use DNS to communicate with command-and-control (CnC) servers and to exfiltrate data in traffic disguised as normal DNS queries. Various types of encoding, such as Base64, 8-bit binary, and Hex can be used to camouflage the data and evade basic data loss prevention (DLP) measures.

 

For example, malware could encode stolen data as the subdomain portion of a DNS lookup for a domain where the nameserver is under control of an attacker. A DNS lookup for ‘long-string-of-exfiltrated-data.example.com’ would be forwarded to the nameserver of example.com, which would record ‘long-string-of-exfiltrated-data’ and reply back to the malware with a coded response.
 
The exfiltrated data is the encoded text shown in the box. The threat actor collects this encoded data, decodes and combines it, and now has access to an entire data file, such as a username/password database.
It is likely that the subdomain part of such requests would be much longer than usual requests. Cyber analysts can use the distribution of the lengths of subdomains within DNS requests to construct a mathematical model that describes normality.
They can then use this model to compare their observations and identify abuse of the DNS query process. For example, it would not be normal to see a host on your network sending a query to aW4gcGxhY2UgdG8gcHJvdGVjdC.example.com.
DNS queries for randomly generated domain names, or extremely long random-appearing subdomains, should be considered suspicious, especially if their occurrence spikes dramatically on the network. DNS proxy logs can be analyzed to detect these conditions. Alternatively, services such as the Cisco Umbrella passive DNS service can be used to block requests to suspected CnC and exploit domains.
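A toy version of this detection can be sketched in Python. The length and entropy thresholds, the sample queries, and the use of Shannon entropy as a stand-in for "random-appearing" are all illustrative assumptions, not a production rule set:

```python
import math
from collections import Counter

# Toy detector: flag DNS queries whose first label is unusually long or
# high-entropy. Thresholds and sample queries are illustrative assumptions.

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious(query: str, max_len: int = 20, max_entropy: float = 3.9) -> bool:
    subdomain = query.split(".")[0]  # examine only the leftmost label
    return len(subdomain) > max_len or shannon_entropy(subdomain) > max_entropy

normal = suspicious("www.example.com")
exfil = suspicious("aW4gcGxhY2UgdG8gcHJvdGVjdC.example.com")
```

A real deployment would fit the thresholds from the observed distribution of subdomain lengths on the network rather than hard-coding them.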

 

The figure shows an infected computer sending a series of DNS queries, with Base64-encoded exfiltrated data disguised as subdomains of example.com, to a compromised DNS server in the cloud.

HTTP and HTTPS

Hypertext Transfer Protocol (HTTP) is the backbone protocol of the World Wide Web. However, all information carried in HTTP is transmitted in plaintext from the source computer to the destination on the internet. HTTP does not protect data from alteration or interception by malicious parties, which is a serious threat to privacy, identity, and information security. All browsing activity should be considered to be at risk.
A common exploit of HTTP is called iFrame (inline frame) injection. Most web-based threats consist of malware scripts that have been planted on webservers. These webservers then direct browsers to infected servers by loading iFrames. In iFrame injection, a threat actor compromises a webserver and plants malicious code which creates an invisible iFrame on a commonly visited webpage.
When the iFrame loads, malware is downloaded, frequently from a different URL than the webpage that contains the iFrame code. Network security services, such as Cisco Web Reputation filtering, can detect when a website attempts to send content from an untrusted website to the host, even when sent from an iFrame.
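The telltale sign of this technique is an iFrame styled to be invisible. A simplified scanner for that pattern might look like the following sketch; the regex, thresholds, and sample markup are simplifying assumptions (real pages call for a proper HTML parser, not regexes):

```python
import re

# Illustrative scan for "invisible" iFrames: zero-sized or hidden frames.
# The pattern and sample HTML are simplifying assumptions.

HIDDEN_IFRAME = re.compile(
    r"<iframe[^>]*(?:width\s*=\s*['\"]?0\b|height\s*=\s*['\"]?0\b|display\s*:\s*none)",
    re.IGNORECASE,
)

def has_hidden_iframe(html: str) -> bool:
    return bool(HIDDEN_IFRAME.search(html))

clean = has_hidden_iframe('<p>hello</p><iframe src="https://partner.example" width="400">')
infected = has_hidden_iframe('<iframe src="http://evil.invalid/x" width="0" height="0">')
```

Network-level services such as web reputation filtering take the complementary approach of judging the frame's source rather than its markup.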
The figure shows a client PC connecting to a trusted website whose accessed webpage calls frames from other sites. Cisco Web Reputation filtering applies to the requested webpage and all frames, because web servers not affiliated with the trusted site may house malicious software.

HTTP iFrame Injection Exploit

To address the alteration or interception of confidential data, many commercial organizations have adopted HTTPS or implemented HTTPS-only policies to protect visitors to their websites and services.
HTTPS adds a layer of encryption to the HTTP protocol by using Secure Sockets Layer (SSL) or its successor, Transport Layer Security (TLS), as shown in the figure. This makes the HTTP data unreadable from the time it leaves the source computer until it reaches the server. Note that HTTPS is not a mechanism for web server security. It only secures HTTP protocol traffic while it is in transit.

 

The figure shows the protocol stacks of the source client and destination server, with an SSL/TLS layer highlighted between the application and transport layers of each. HTTP data is encrypted before being handed to TCP, IP, and Ethernet for transmission across the network media to the destination server.
Unfortunately, the encrypted HTTPS traffic complicates network security monitoring. Some security devices include SSL decryption and inspection; however, this can present processing and privacy issues. In addition, HTTPS adds complexity to packet captures due to the additional messaging involved in establishing the encrypted connection. This process is summarized in the figure and represents additional overhead on top of HTTP.
The figure summarizes an HTTPS transaction between a client PC and a web server:

  • The client browser requests a secure page with https://.
  • The web server sends its public key with its certificate.
  • The client browser verifies that the certificate is unexpired, unrevoked, and issued by a trusted party.
  • The client browser creates a symmetric key and sends it to the server.
  • The web server decrypts the symmetric key using its private key.
  • The web server uses the symmetric key to encrypt the page and sends it to the client.
  • The client browser uses the symmetric key to decrypt the page and display the information to the user.

 

HTTPS Transactions

Email Protocols

Email protocols such as SMTP, POP3, and IMAP can be used by threat actors to spread malware, exfiltrate data, or provide channels to malware CnC servers.
 
SMTP sends data from a host to a mail server and between mail servers. Like DNS and HTTP, it is a common protocol to see leaving the network. Because there is so much SMTP traffic, it is not always monitored.
However, SMTP has been used in the past by malware to exfiltrate data from the network. In the 2014 hack of Sony Pictures, one of the exploits used SMTP to exfiltrate user details from compromised hosts to CnC servers.
This information may have been used to help develop exploits of secured resources within the Sony Pictures network. Security monitoring could reveal this type of traffic based on features of the email message.

 

IMAP and POP3 are used to download email messages from a mail server to the host computer. For this reason, they are the application protocols that are responsible for bringing malware to the host. Security monitoring can identify when a malware attachment entered the network and which host it first infected. Retrospective analysis can then track the behaviour of the malware from that point forward.

 

In this way, the malware behaviour can better be understood and the threat identified. Security monitoring tools may also allow recovery of infected file attachments for submission to malware sandboxes for analysis.

 

The figure shows an infected host exfiltrating data over SMTP to CnC servers operated by a threat actor, while malware infections arrive on the host over POP3/IMAP.

Email Protocol Threats

ICMP

ICMP has many legitimate uses; however, ICMP functionality has also been used to craft a number of types of exploits. ICMP can be used to identify hosts on a network, map the structure of a network, and determine the operating systems in use on the network. It can also be used as a vehicle for various types of DoS attacks.
ICMP can also be used for data exfiltration. Because of the concern that ICMP can be used to surveil or deny service from outside of the network, ICMP traffic from inside the network is sometimes overlooked. However, some varieties of malware use crafted ICMP packets to transfer files from infected hosts to threat actors, a technique known as ICMP tunnelling.
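One simple symptom of ICMP tunnelling is echo payloads far larger than a typical ping. The threshold and sample packets in this sketch are illustrative assumptions, not a protocol rule:

```python
# Toy check for one ICMP tunnelling symptom: oversized echo payloads.
# The 64-byte threshold and sample packets are illustrative assumptions
# (a common ping default is 56 data bytes plus header slack).

TYPICAL_PING_PAYLOAD = 64  # bytes

def oversized_icmp(payloads: list[bytes], threshold: int = TYPICAL_PING_PAYLOAD) -> list[int]:
    """Return indices of ICMP payloads that exceed the threshold."""
    return [i for i, p in enumerate(payloads) if len(p) > threshold]

# Two ordinary-looking pings and one packet stuffed with tunnelled data
packets = [b"\x00" * 56, b"A" * 56, b"B" * 1400]
flagged = oversized_icmp(packets)
```

Real detection would also consider payload entropy, request/reply asymmetry, and volume over time, since tunnels can deliberately use small payloads.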
Search the internet for a detailed explanation of the well-known LOKI exploit.
Note: This site might be blocked by your institution’s firewall.
A number of tools exist for crafting tunnels. Search the internet for Ping Tunnel to explore one such tool.
 

Understanding Access Control List In Cybersecurity

 

Many technologies and protocols can have impacts on security monitoring. Access Control Lists (ACLs) are among these technologies. ACLs can give a false sense of security if they are overly relied upon. ACLs, and packet filtering in general, are technologies that contribute to an evolving set of network security protections.

 

The figure illustrates the use of ACLs to permit only specific types of Internet Control Message Protocol (ICMP) traffic. The server at 192.168.1.10 is part of the inside network and is allowed to send ping requests to the outside host at 209.165.201.3. The outside host’s return ICMP traffic is allowed if it is an ICMP reply, source quench (tells the source to reduce the pace of traffic), or any ICMP unreachable message.

All other ICMP traffic types are denied. For example, the outside host cannot initiate a ping request to the inside host. However, because the outbound ACL still allows ICMP messages that report various problems, those permitted message types can be abused for ICMP tunnelling and data exfiltration.
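An ACL matching this description might look like the following IOS-style sketch. The ACL number and host addressing are illustrative assumptions based on the scenario above, not a tested device configuration:

```
! Permit only the ICMP reply types described above back to the inside host
access-list 112 permit icmp any host 192.168.1.10 echo-reply
access-list 112 permit icmp any host 192.168.1.10 source-quench
access-list 112 permit icmp any host 192.168.1.10 unreachable
! Deny all other inbound ICMP, then allow remaining IP traffic
access-list 112 deny icmp any any
access-list 112 permit ip any any
```

Note that even this restrictive list still admits the ICMP types that tunnelling tools can ride on, which is the limitation the surrounding text describes.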

 

Attackers can determine which IP addresses, protocols, and ports are allowed by ACLs. This can be done either by port scanning, penetration testing or through other forms of reconnaissance. Attackers can craft packets that use spoofed source IP addresses. Applications can establish connections on arbitrary ports. Other features of protocol traffic can also be manipulated, such as the established flag in TCP segments. Rules cannot be anticipated and configured for all emerging packet manipulation techniques.

 

In order to detect and react to packet manipulation, more sophisticated behaviour and context-based measures need to be taken. Cisco Next-Generation firewalls, Advanced Malware Protection (AMP), and email and web content appliances are able to address the shortcomings of rule-based security measures.

 

Mitigating ICMP Abuse

NAT and PAT

Network Address Translation (NAT) and Port Address Translation (PAT) can complicate security monitoring. Multiple IP addresses are mapped to one or more public addresses that are visible on the internet, hiding the individual IP addresses that are inside the network (inside addresses).
The figure illustrates the relationship between internal and external addresses that are used as source addresses (SA) and destination addresses (DA). These internal and external addresses are in a network that is using NAT to communicate with a destination on the internet. If PAT is in effect, and all IP addresses leaving the network use the 209.165.200.226 inside global address for traffic to the internet, it could be difficult to log the specific inside device that is requesting and receiving the traffic when it enters the network.
This problem can be especially relevant with NetFlow data. NetFlow flows are unidirectional and are defined by the addresses and ports that they share. NAT will essentially break a flow that passes a NAT gateway, making flow information beyond that point unavailable. Cisco offers security products that will “stitch” flows together even if the IP addresses have been replaced by NAT.
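The "broken flow" problem can be shown with a toy flow key. The addresses and ports below are invented examples; the point is that rewriting the source address and port at the NAT gateway changes the 5-tuple that defines the flow:

```python
# Toy illustration of why NAT "breaks" unidirectional flow records:
# the flow key changes when source address/port are rewritten.
# Addresses and ports are invented examples.

def flow_key(src_ip, src_port, dst_ip, dst_port, proto):
    return (src_ip, src_port, dst_ip, dst_port, proto)

# The conversation as seen inside the network...
inside = flow_key("192.168.1.10", 51000, "198.51.100.5", 443, "TCP")
# ...and the same conversation after PAT rewrites the source at the gateway
outside = flow_key("209.165.200.226", 2031, "198.51.100.5", 443, "TCP")

same_flow = inside == outside  # False: a collector sees two unrelated flows
```

"Stitching" products correlate the two records by matching the unchanged destination side together with NAT translation logs.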

Network Address Translation

Encryption, Encapsulation, and Tunneling

As mentioned with HTTPS, encryption can present challenges to security monitoring by making packet details unreadable. Encryption is part of VPN technologies. In VPNs, a commonplace protocol like IP is used to carry encrypted traffic. The encrypted traffic essentially establishes a virtual point-to-point connection between networks over public facilities. Encryption makes the traffic unreadable to any other devices but the VPN endpoints.
Similar technology can be used to create a virtual point-to-point connection between an internal host and threat actor devices. Malware can establish an encrypted tunnel that rides on a common and trusted protocol, and use it to exfiltrate data from the network. A similar method of data exfiltration was discussed previously for DNS.

Peer-to-Peer Networking and Tor

In peer-to-peer (P2P) networking, shown in the figure, hosts can operate in both client and server roles. Three types of P2P applications exist: file-sharing, processor sharing, and instant messaging. In file-sharing P2P, files on a participating machine are shared with members of the P2P network. Examples of this are the once-popular Napster and Gnutella. Bitcoin is a P2P operation that involves the sharing of a distributed database, or ledger, that records Bitcoin balances and transactions. BitTorrent is a P2P file-sharing network.

 

Any time that unknown users are provided access to network resources, security is a concern. File-sharing P2P applications should not be allowed on corporate networks. P2P network activity can circumvent firewall protections and is a common vector for the spread of malware.

P2P is inherently dynamic. It can operate by connecting to numerous destination IP addresses, and it can also use dynamic port numbering. Shared files are often infected with malware, and threat actors can position their malware on P2P clients for distribution to other users.

 

Processor sharing P2P networks donate processor cycles to distributed computational tasks. Cancer research, searching for extraterrestrials, and scientific research use donated processor cycles to distribute computational tasks.

Instant messaging (IM) is also considered to be a P2P application. IM has legitimate value within organizations that have geographically distributed project teams. In this case, specialized IM applications are available, such as the Webex Teams platform, which is more secure than IM that uses public servers.

 

The figure shows phones, laptops, and PCs linked to one another in unstructured P2P logical connections through which file sharing and other services may occur.

P2P

Tor is a software platform and network of P2P hosts that function as internet routers on the Tor network. The Tor network allows users to browse the internet anonymously. Users access the Tor network by using a special browser.

 

When a browsing session is begun, the browser constructs a layered end-to-end path across the Tor server network that is encrypted, as shown in the figure. Each encrypted layer is “peeled away” like the layers of an onion (hence “onion routing”) as the traffic traverses a Tor relay.

 

The layers contain encrypted next-hop information that can only be read by the router that needs to read the information. In this way, no single device knows the entire path to the destination, and routing information is readable only by the device that requires it. Finally, at the end of the Tor path, the traffic reaches its internet destination. When traffic is returned to the source, an encrypted layered path is again constructed.
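The layered "peeling" can be illustrated with a toy in Python. XOR here stands in for real per-hop encryption and is NOT secure, and the relay keys are invented; the point is only that the sender wraps one layer per relay, and each relay strips exactly one layer:

```python
from itertools import cycle

# Toy onion layering. XOR is a stand-in for real encryption (it is NOT
# secure), and the relay keys are invented for illustration.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

relay_keys = [b"key-entry", b"key-middle", b"key-exit"]  # one key per hop

def wrap(message: bytes) -> bytes:
    onion = message
    for key in reversed(relay_keys):  # innermost layer is for the exit node
        onion = xor(onion, key)
    return onion

def route(onion: bytes) -> bytes:
    for key in relay_keys:  # each relay peels exactly one layer
        onion = xor(onion, key)
    return onion

delivered = route(wrap(b"GET /index.html"))
```

In real onion routing each layer also carries next-hop addressing, so no relay ever sees both the origin and the final destination.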

 

Tor presents a number of challenges to cybersecurity analysts. First, Tor is widely used by criminal organizations on the “dark net.” In addition, Tor has been used as a communications channel for malware CnC. Because the destination IP address of Tor traffic is obfuscated by encryption, with only the next-hop Tor node known, Tor traffic avoids blacklists that have been configured on security devices.

 

The figure shows a user's PC whose Tor software constructs a random path through a network of Tor relays among internet-accessible computers, with encrypted packet contents at every hop; traffic is unencrypted only from the Tor exit node to its destination anywhere on the internet.

 

Load Balancing

Load balancing involves the distribution of traffic between devices or network paths to prevent overwhelming network resources with too much traffic. If redundant resources exist, a load balancing algorithm or device will work to distribute traffic between those resources, as shown in the figure.

 

One way this is done on the internet is through various techniques that use DNS to send traffic to resources that have the same domain name but multiple IP addresses. In some cases, the distribution may be to servers that are distributed geographically.
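A minimal round-robin sketch shows the behaviour described above: successive lookups for the same name return different addresses from a pool. The domain name and addresses are invented examples:

```python
from itertools import cycle

# Toy round-robin DNS load balancing: each resolution of a name returns
# the next address in its pool. The name and addresses are invented.

POOL = {"www.example.com": ["203.0.113.10", "203.0.113.11", "203.0.113.12"]}
_rotors = {name: cycle(addrs) for name, addrs in POOL.items()}

def resolve(name: str) -> str:
    return next(_rotors[name])

# Four consecutive lookups cycle through the pool and wrap around
answers = [resolve("www.example.com") for _ in range(4)]
```

This is why a single logical transaction with one domain can appear in packet captures as connections to several different IP addresses.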

 

This can result in a single internet transaction being represented by multiple IP addresses on the incoming packets. This may cause suspicious features to appear in packet captures.

 

In addition, some load balancing manager (LBM) devices use probes to test the performance of different paths and the health of different devices. For example, an LBM may send probes to the different servers that it is load balancing traffic to, in order to verify that the servers are operating.

 

This is done to avoid sending traffic to a resource that is not available. These probes can appear to be suspicious traffic if the cybersecurity analyst is not aware that this traffic is part of the operation of the LBM.

 

The figure shows a DNS load-balancing exchange. 1. A user’s PC wants to visit www.example.com and sends a DNS query to the local DNS server, ns.locallsp.com. 2. The local DNS server lacks a record for example.com and queries other servers. 3. The request reaches the authoritative DNS server for the domain, where an NS record delegates the request to the load balancer at www.example.com. 4. The load balancer, loadBalance.example.com, returns the IP address of a server in the server pool depending on load. The local DNS server then returns the IP address of the load-balanced www.example.com server to the client.

 

7 Types Of Security Data In Cybersecurity

In this article, I will be talking about security data in cybersecurity. Alert data consists of messages generated by intrusion prevention systems (IPSs) or intrusion detection systems (IDSs) in response to traffic that violates a rule or matches the signature of a known exploit. A network IDS (NIDS), such as Snort, comes configured with rules for known exploits.
Alerts are generated by Snort and are made readable and searchable by the Sguil and Squert applications, which are part of the Security Onion suite of NSM tools.

 

A testing site that is used to determine if Snort is operating is the testmyids site. Search for it on the internet. It consists of a single webpage that displays only the following text: uid=0(root) gid=0(root) groups=0(root). If Snort is operating correctly and a host visits this site, a signature will be matched and an alert will be triggered. This is an easy and harmless way to verify that the NIDS is running.
The Snort rule that is triggered is:
alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; fast_pattern:only; classtype:bad-unknown; sid:2100498; rev:8;)

This rule generates an alert if any IP address in the network receives data from an external source that contains content with text matching the pattern of uid=0(root). The alert contains the message GPL ATTACK_RESPONSE id check returned root. The ID of the Snort rule that was triggered is 2100498.
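As an illustration of how the rule’s option block is structured, the sketch below splits the options on semicolons and each option on its first colon. This is adequate for this particular rule; a production parser would also have to handle semicolons inside quoted strings:

```python
import re

rule = ('alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check '
        'returned root"; content:"uid=0|28|root|29|"; fast_pattern:only; '
        'classtype:bad-unknown; sid:2100498; rev:8;)')

def parse_rule_options(rule_text: str) -> dict:
    # Pull the option block between the outer parentheses, then split
    # options on ';' and each key/value pair on the first ':'.
    options = re.search(r"\((.*)\)", rule_text).group(1)
    parsed = {}
    for opt in options.split(";"):
        key, _, value = opt.strip().partition(":")
        if key:
            parsed[key] = value.strip('"')
    return parsed

opts = parse_rule_options(rule)
print(opts["sid"], "-", opts["msg"])
```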
The highlighted line in the figure displays a Sguil alert that was generated by visiting the testmyids website. The Snort rule and the packet data for the content received from the testmyids webpage are displayed in the lower right-hand area of the Sguil interface.

Sguil Console Showing Test Alert from Snort IDS

Session and Transaction Data

Session data is a record of a conversation between two network endpoints, which are often a client and a server. The server could be inside the enterprise network or at a location accessed over the internet. Session data is data about the session, not the data retrieved and used by the client.
Session data will include identifying information known as the 5-tuple: the source and destination IP addresses, the source and destination port numbers, and the IP protocol number in use. Data about the session typically includes a session ID, the amount of data transferred by source and destination, and information related to the duration of the session.
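A minimal model of a session record keyed by the 5-tuple might look like the sketch below; the field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    # The identifying 5-tuple for a session; frozen so it can be used
    # as a dictionary key in a session table.
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: int  # IP protocol number, e.g. 6 = TCP, 17 = UDP

@dataclass
class SessionRecord:
    key: FiveTuple
    bytes_from_src: int = 0
    bytes_from_dst: int = 0
    duration_ms: int = 0

session = SessionRecord(
    key=FiveTuple("192.168.1.10", 49152, "203.0.113.5", 443, 6),
    bytes_from_src=1200, bytes_from_dst=58000, duration_ms=3400)
print(session.key.protocol)  # 6 (TCP)
```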
Zeek, formerly Bro, is a network security monitoring tool you will use in labs later in the course. The figure shows a partial output for three HTTP sessions from a Zeek connection log. Explanations of the fields are shown below the figure.

Zeek Session Data – Partial Contents

Transaction data consists of the messages that are exchanged during network sessions. These transactions can be viewed in packet capture transcripts. Device logs kept by servers also contain information about the transactions that occur between clients and servers. For example, a session might include the downloading of content from a web server, as shown in the figure.
The transactions that represent the requests and replies would be logged in an access log on the server or by a NIDS like Zeek. The session is all traffic involved in making up the request; the transaction is the request itself.

Transaction Data

Full Packet Captures

Full packet captures are the most detailed network data that is generally collected. Because of the amount of detail, they are also the most storage- and retrieval-intensive types of data used in NSM. Full packet captures contain not only data about network conversations, like session data, but also the actual contents of the conversations.
Full packet captures contain the text of email messages, the HTML in webpages, and the files that enter or leave the network. Extracted content can be recovered from full packet captures and analyzed for malware or user behaviour that violates business and security policies.
The familiar tool Wireshark is very popular for viewing full packet captures and accessing the data associated with network conversations.
The figure illustrates the interface for the Network Analysis Monitor component of the Cisco Prime Infrastructure system, which, like Wireshark, can display full packet captures.

Cisco Prime Network Analysis Module – Full Packet Capture

Statistical Data

Like session data, statistical data is about network traffic. Statistical data is created through the analysis of other forms of network data. Conclusions can be made that describe or predict network behaviour from this analysis. Statistical characteristics of normal network behaviour can be compared to current network traffic in an effort to detect anomalies.

 

Statistics can be used to characterize normal amounts of variation in network traffic patterns in order to identify network conditions that are significantly outside of those ranges. Statistically, significant differences should raise alarms and prompt investigation.
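One common statistical test flags observations that fall outside a set number of standard deviations from the baseline mean. A minimal sketch, using made-up baseline samples of bytes per minute:

```python
from statistics import mean, stdev

# Hypothetical baseline: bytes per minute sampled during normal operation.
baseline = [1200, 1350, 1100, 1280, 1420, 1150, 1300, 1250]

def is_anomalous(observation: float, samples: list, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    away from the baseline mean."""
    mu, sigma = mean(samples), stdev(samples)
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(1290, baseline))   # typical traffic -> False
print(is_anomalous(15000, baseline))  # large spike, possible exfiltration -> True
```

Real NBA/NBAD products use far richer models than a single z-score, but the principle is the same: characterize normal variation, then alarm on statistically significant departures.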

 

Network Behavior Analysis (NBA) and Network Behavior Anomaly Detection (NBAD) are approaches to network security monitoring that use advanced analytical techniques to analyze NetFlow or Internet Protocol Flow Information Export (IPFIX) network telemetry data. Techniques such as predictive analytics and artificial intelligence perform advanced analyses of detailed session data to detect potential security incidents.

 

Note: IPFIX is the IETF standard version of Cisco NetFlow version 9.
An example of an NSM tool that utilizes statistical analysis is Cisco Cognitive Threat Analytics. It is able to find a malicious activity that has bypassed security controls or entered the network through unmonitored channels (including removable media) and is operating inside an organization’s environment.

 

Cognitive Threat Analytics is a cloud-based product that uses machine learning and statistical modelling of networks. It creates a baseline of the traffic in a network and identifies anomalies. It analyzes user and device behaviour, and web traffic, to discover command-and-control communications, data exfiltration, and potentially unwanted applications operating in the infrastructure. The figure illustrates an architecture for Cisco Cognitive Threat Analytics.

 

The figure shows three internal users, each with an arrow pointing to a behavioural analysis icon, which in turn points to a potential threat icon on the right. Below behavioural analysis are two more icons, anomaly detection and machine learning, connected in a cycle: behavioural analysis to anomaly detection, anomaly detection to machine learning, and machine learning back to behavioural analysis.

 


End Device Logs In Cybersecurity: The Various Types

As previously discussed, host-based intrusion detection systems (HIDS) run on individual hosts. HIDS not only detects intrusions but, in the form of host-based firewalls, can also prevent them. This software creates logs and stores them on the host, which can make it difficult to get a view of what is happening on hosts across the enterprise, so many host-based protections have a way to submit logs to centralized log management servers. In this way, the logs can be searched from a central location using NSM tools.

 

HIDS systems can use agents to submit logs to management servers. OSSEC, a popular open-source HIDS, includes robust log collection and analysis functionality. Search OSSEC on the internet to learn more. Microsoft Windows includes several methods for automated host log collection and analysis. Tripwire offers a HIDS for Linux that includes similar functionality. All can scale to larger enterprises.
Microsoft Windows host logs are visible locally through Event Viewer. Event Viewer keeps these types of logs:

 

  • Application logs – These contain events logged by various applications.
  • System logs – These include events regarding the operation of drivers, processes, and hardware.
  • Setup logs – These record information about the installation of software, including Windows updates.
  • Security logs – These record events related to security, such as logon attempts and operations related to file or object management and access.
  • Command-line logs – Attackers who have gained access to a system, and some types of malware, execute commands from the command-line interface (CLI) rather than a GUI. Logging command line execution will provide visibility into this type of incident.

Various logs can have different event types. Security logs consist only of audit success or failure messages. On Windows computers, security logging is carried out by the Local Security Authority Subsystem Service (LSASS), which is also responsible for enforcing security policies on a Windows host. LSASS runs as lsass.exe, a name that is frequently spoofed by malware.

It should run from the Windows System32 directory. If a file with this name, or a camouflaged name such as 1sass.exe, is running or running from another directory, it could be malware.
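A simple check along these lines could compare a process name and path against the expected values. The heuristic below, including the digit-for-letter substitution test, is illustrative only; real endpoint tools use far more thorough validation:

```python
from pathlib import PureWindowsPath

EXPECTED_DIR = PureWindowsPath(r"C:\Windows\System32")

def suspicious_lsass(process_name: str, process_path: str) -> bool:
    """Flag processes that imitate lsass.exe (e.g. '1sass.exe') or run
    under the real name from the wrong directory."""
    name = process_name.lower()
    parent = PureWindowsPath(process_path).parent
    # Camouflaged name: not lsass.exe, but becomes it when '1' -> 'l'.
    looks_like = name != "lsass.exe" and name.replace("1", "l") == "lsass.exe"
    # Correct name, wrong location (PureWindowsPath compares case-insensitively).
    wrong_dir = name == "lsass.exe" and parent != EXPECTED_DIR
    return looks_like or wrong_dir

print(suspicious_lsass("lsass.exe", r"C:\Windows\System32\lsass.exe"))  # False
print(suspicious_lsass("1sass.exe", r"C:\Users\Public\1sass.exe"))      # True
print(suspicious_lsass("lsass.exe", r"C:\Temp\lsass.exe"))              # True
```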

Windows Events are identified by ID numbers and brief descriptions. An encyclopedia of security event IDs, some with additional details, is available from Ultimate Windows Security on the web.

The table explains the meaning of the five Windows host log event types.

Event Type Description
Error An error is an event that indicates a significant problem such as loss of data or loss of functionality. For example, if a service fails to load during startup, an error event is logged.
Warning A Warning is an event that is not necessarily significant but may indicate a possible future problem. For example, when disk space is low, a warning event is logged. If an application can recover from an event without loss of functionality or data, it can generally classify the event as a warning event.
Information An information event describes the successful operation of an application, driver, or service. For example, when a network driver loads successfully, it may be appropriate to log an information event. Note that it is generally inappropriate for a desktop application to log an event each time it starts.
Success Audit A success audit is an event that records an audited security access attempt that is successful. For example, a user’s successful attempt to log on to the system is logged as a success audit event.
Failure Audit A failure audit is an event that records an audited security access attempt that fails. For example, if a user tries to access a network drive and fails, the attempt is logged as a failure audit event.

 

Syslog

Syslog includes specifications for message formats, a client-server application structure, and network protocol. Many different types of network devices can be configured to use the Syslog standard to log events to centralized syslog servers.
Syslog is a client/server protocol. Syslog was defined within the Syslog working group of the IETF (RFC 5424) and is supported by a wide variety of devices and receivers across multiple platforms.
The Syslog sender sends a small (less than 1KB) text message to the Syslog receiver. The Syslog receiver is commonly called “syslogd,” “Syslog daemon,” or “Syslog server.” Syslog messages can be sent via UDP (port 514) and/or TCP (commonly port 514 as well, or port 6514 when secured with TLS). While there are some exceptions, such as SSL wrappers, this data is typically sent in plaintext over the network.

The full format of a Syslog message that is seen on the network has three distinct parts, as shown in the figure.

  • PRI (priority)
  • HEADER
  • MSG (message text)

The PRI consists of two elements, the Facility and Severity of the message, which are both integer values. The Facility consists of broad categories of sources that generated the message, such as the system, process, or application. The Facility value can be used by logging servers to direct the message to the appropriate log file. The Severity is a value from 0-7 that defines the severity of the message.
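Because PRI = Facility × 8 + Severity, both values can be recovered from the single integer in the angle brackets at the start of a message. For example, a PRI of 165 decodes to facility 20 (local4) and severity 5 (notice):

```python
def decode_pri(pri: int) -> tuple:
    """Split a Syslog PRI value into (facility, severity).
    PRI = facility * 8 + severity, per the Syslog standard."""
    return pri // 8, pri % 8

SEVERITIES = ["emergency", "alert", "critical", "error",
              "warning", "notice", "informational", "debug"]

facility, severity = decode_pri(165)
print(facility, SEVERITIES[severity])  # 20 notice  (facility 20 = local4)
```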

 

The figure shows the three main sections of a Syslog message, which can be up to 1024 bytes: the 8-bit PRI section, carrying the Severity and Facility; the HEADER section, carrying the timestamp and hostname; and the MSG section.
Note: Facility codes 16 through 23 (local0-local7) are not assigned a keyword or name. They can be assigned different meanings depending on the user context. Also, various operating systems have been found to utilize both Facilities 9 and 15 for clock messages.
The HEADER section of the message contains the timestamp in MMM DD HH:MM:SS format. If the timestamp is preceded by the period (.) or asterisk (*) symbols, a problem is indicated with NTP. The HEADER section also includes the hostname or IP address of the device that is the source of the message.
The MSG portion contains the meaning of the Syslog message. This can vary between device manufacturers and can be customized. Therefore, this portion of the message is the most meaningful and useful to the cybersecurity analyst.

Server Logs

Server logs are an essential source of data for network security monitoring. Network application servers such as email and web servers keep access and error logs. DNS proxy server logs which document all the DNS queries and responses that occur on the network are especially important.
DNS proxy logs are useful for identifying hosts that may have visited dangerous websites and for identifying DNS data exfiltration and connections to malware command-and-control servers. Many UNIX and Linux servers use Syslog. Others may use proprietary logging. The contents of log file events depend on the type of server.
Two important log files to be familiar with are the Apache webserver access logs and Microsoft Internet Information Server (IIS) access logs. Examples of each are shown below.

Apache Access Log

203.0.113.127 - dsmith [10/Oct/2016:10:26:57 -0500] "GET /logo_sm.gif HTTP/1.0" 200 2254 "http://www.example.com/links.html" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"
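Log lines in this format can be broken into fields with a regular expression. This sketch handles the common prefix of the combined format and leaves the referrer and user-agent fields unparsed for brevity:

```python
import re

line = ('203.0.113.127 - dsmith [10/Oct/2016:10:26:57 -0500] '
        '"GET /logo_sm.gif HTTP/1.0" 200 2254 '
        '"http://www.example.com/links.html" "Mozilla/5.0 ..."')

# Client IP, RFC 1413 identity, authenticated user, timestamp,
# request line, status code, and response size.
LOG_PATTERN = re.compile(
    r'(?P<client>\S+) (?P<identity>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)')

match = LOG_PATTERN.match(line)
print(match.group("client"), match.group("status"), match.group("size"))
```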

IIS Access Log

6/14/2016, 16:22:43, 203.0.113.24, -, W3SVC2, WEB3, 198.51.100.10, 80, GET, /home.htm, -, 200, 0, 15321, 159, 15, HTTP/1.1, Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0), -, http://www.example.com

SIEM and Log Collection

Security Information and Event Management (SIEM) technology is used in many organizations to provide real-time reporting and long-term analysis of security events, as shown in the figure.
The figure shows a SIEM circle in the middle with textboxes around it, each with an arrow pointing to the SIEM: threat intelligence; asset management; log storage; NetFlow telemetry; full packet captures; antimalware devices; IDS/IPS; firewalls; HIDS; and server logs and Syslog. Four arrows come out from the SIEM circle, each to a textbox: compliance reporting, dashboards and reports, alerts and automation, and incident management system.
SIEM combines the essential functions of security event management (SEM) and security information management (SIM) tools to provide a comprehensive view of the enterprise network using the following functions:

 

  • Log collection – Event records from sources throughout the organization provide important forensic information and help to address compliance reporting requirements.
  • Normalization – This maps log messages from different systems into a common data model, enabling the organization to connect and analyze related events, even if they are initially logged in different source formats.
  • Correlation – This links logs and events from disparate systems or applications, speeding detection of and reaction to security threats.
  • Aggregation – This reduces the volume of event data by consolidating duplicate event records.
  • Reporting – This presents the correlated, aggregated event data in real-time monitoring and long-term summaries, including graphical interactive dashboards.
  • Compliance – This is reporting to satisfy the requirements of various compliance regulations.
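As a sketch of the aggregation function above, duplicate normalized events can be consolidated into counted records; the event tuples here are invented:

```python
from collections import Counter

# Hypothetical normalized events collected from several sources:
# (source address, event type) pairs.
events = [
    ("203.0.113.5", "failed_login"),
    ("203.0.113.5", "failed_login"),
    ("203.0.113.5", "failed_login"),
    ("198.51.100.7", "port_scan"),
]

def aggregate(event_list):
    """Consolidate duplicate event records into one record per
    (source, type) pair with an occurrence count."""
    counts = Counter(event_list)
    return [{"source": src, "type": kind, "count": n}
            for (src, kind), n in counts.items()]

for record in aggregate(events):
    print(record)  # four raw events reduce to two aggregated records
```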

A popular SIEM is Splunk, which is made by a Cisco partner. The figure shows a Splunk Threat Dashboard. Splunk is widely used in SOCs. Another popular SIEM solution is Security Onion with ELK, which consists of the integrated Elasticsearch, Logstash, and Kibana applications. Security Onion includes other open-source network security monitoring tools.

Splunk Threat Dashboard

As we know, security orchestration, automation, and response (SOAR) takes SIEM further by automating security response workflows and facilitating incident response. Because of the importance of network security, numerous companies have brought excellent products to the security tools market.

 

However, these tools lack compatibility and require monitoring multiple independent product dashboards in order to process the many alerts that they generate. Because of the lack of cybersecurity professionals to monitor and analyze the large volume of security data, it is important that tools from multiple vendors can be integrated into a single platform.

 

Integrated security platforms go beyond SIEM and SOAR to unify multiple security technologies, processes, and people into a unified team whose components build on rather than impede each other. Security platforms such as Cisco SecureX, Fortinet Security Fabric, and Palo Alto Networks Cortex XDR promise to address network security monitoring complexity by integrating multiple functions and data sources into a single platform that will greatly enhance alert accuracy while offering a robust defence.

 

Network Logs In Cybersecurity: Facts To Note


The tcpdump command-line tool is a very popular packet analyzer. It can display packet captures in real-time or write packet captures to a file. It captures detailed packet protocol and content data. Wireshark is a GUI built on tcpdump functionality.
The structure of tcpdump captures varies depending on the protocol captured and the fields requested.
 

NetFlow

NetFlow is a protocol that was developed by Cisco as a tool for network troubleshooting and session-based accounting. NetFlow efficiently provides an important set of services for IP applications, including network traffic accounting, usage-based network billing, network planning, security, Denial-of-Service monitoring capabilities, and network monitoring. NetFlow provides valuable information about network users and applications, peak usage times, and traffic routing.

 

NetFlow does not do a full packet capture or capture the actual content in the packet. NetFlow records information about the packet flow including metadata. Cisco developed NetFlow and then allowed it to be used as a basis for an IETF standard called IPFIX. IPFIX is based on Cisco NetFlow Version 9.

 

NetFlow information can be viewed with tools such as nfdump. Similar to tcpdump, nfdump provides a command-line utility for viewing NetFlow data from the nfcapd capture daemon, or collector. Tools exist that add GUI functionality for viewing flows. The figure shows a screen from the open-source FlowViewer tool.

 

FlowViewer NetFlow Session Data Dashboard

Traditionally, an IP Flow is based on a set of 5 to 7 IP packet attributes flowing in a single direction. A flow consists of all packets transmitted until the TCP conversation terminates. IP Packet attributes used by NetFlow are:

  • IP source address
  • IP destination address
  • Source port
  • Destination port
  • Layer 3 protocol type
  • Class of Service
  • Router or switch interface

All packets with the same source/destination IP address, source/destination ports, protocol, interface, and class of service are grouped into a flow, and then packets and bytes are tallied. This methodology of fingerprinting or determining a flow is scalable because a large amount of network information is condensed into a database of NetFlow information called the NetFlow cache.
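The grouping and tallying can be sketched as follows, using a simplified five-field key and invented packet data:

```python
from collections import defaultdict

# Hypothetical packets: (src_ip, dst_ip, src_port, dst_port, proto, bytes).
packets = [
    ("10.1.1.2", "13.1.1.2", 8974, 80, "TCP", 1200),
    ("10.1.1.2", "13.1.1.2", 8974, 80, "TCP", 900),
    ("10.1.1.3", "13.1.1.2", 50001, 443, "TCP", 400),
]

def build_flow_cache(pkts):
    """Group packets sharing the same key fields into flows and tally
    packet and byte counts, as a NetFlow cache does."""
    cache = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, sport, dport, proto, size in pkts:
        key = (src, dst, sport, dport, proto)
        cache[key]["packets"] += 1
        cache[key]["bytes"] += size
    return dict(cache)

cache = build_flow_cache(packets)
print(len(cache))  # 3 packets condense into 2 flows
```

A real NetFlow cache also tracks start and end timestamps, interface, and class of service per flow, and expires flows when the conversation ends or times out.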

 

 
All NetFlow flow records will contain the first five items in the list above, and flow start and end timestamps. The additional information that may appear is highly variable and can be configured on the NetFlow Exporter device.

 

 

Exporters are devices that can be configured to create flow records and transmit those flow records for storage on a NetFlow collector device. An example of a basic NetFlow flow record, in two different formats, is shown in the figure.

 

Simple NetFlow v5 Records

Date flow start          Duration Proto   Src IP Addr:Port      Dst IP Addr:Port   Flags  Tos Packets Bytes Flows
2017-08-30 00:09:12.596  00.010   TCP     10.1.1.2:80        -> 13.1.1.2:8974     .AP.SF   0      62  3512     1

Traffic Contribution: 8% (3/37)
Flow information:
IPV4 SOURCE ADDRESS:       10.1.1.2
IPV4 DESTINATION ADDRESS:  13.1.1.2
INTERFACE INPUT:           Se0/0/1
TRNS SOURCE PORT:          8974
TRNS DESTINATION PORT:     80
IP TOS:                    0x00
IP PROTOCOL:               6
FLOW SAMPLER ID:           0
FLOW DIRECTION:            Input
ipv4 source mask:          /0
ipv4 destination mask:     /8
counter bytes:             205
ipv4 next hop address:     13.1.1.2
tcp flags:                 0x1b
interface output:          Fa0/0
counter packets:           5
timestamp first:           00:09:12.596
timestamp last:            00:09:12.606
ip source as:              0
ip destination as:         0
A large number of attributes for a flow are available. The IANA registry of IPFIX entities lists several hundred, with the first 128 being the most common.
Although NetFlow was not initially conceived as a tool for network security monitoring, it is seen as a useful tool in the analysis of network security incidents. It can be used to construct a timeline of compromise, understand individual host behaviour, or track the movement of an attacker or exploit from host to host within a network. The Cisco/Lancope Stealthwatch technology enhances the use of NetFlow data for NSM.
 

Application Visibility and Control

The Cisco Application Visibility and Control (AVC) system, which is shown in the figure, combines multiple technologies to recognize, analyze, and control over 1000 applications. These include voice and video, email, file sharing, gaming, peer-to-peer (P2P), and cloud-based applications.
AVC uses Cisco next-generation network-based application recognition version 2 (NBAR2), also known as Next-Generation NBAR, to discover and classify the applications in use on the network. The NBAR2 application recognition engine supports over 1000 network applications.
To truly understand the importance of this technology, consider the figure. Identification of network applications by port provides very little granularity and visibility into user behaviour. However, application visibility through the identification of application signatures identifies what users are doing, whether it be teleconferencing or downloading movies to their phones.
The figure has four columns. The leftmost column shows a router with a magnifying glass: Application Recognition identifies applications using L3 to L7 data for 1000+ applications, such as cloud services, Cisco WebEx, YouTube, Skype, and P2P, via NBAR2. The next column shows charts: Metrics Collection collects metrics for export to a management tool, including bandwidth usage, response time, latency, packet loss, jitter, and P2P, via NetFlow 9, Flexible NetFlow, and IPFIX. The third column shows a router labelled Management and Reporting Tools, which provide the network, collect data, and report on application performance, including report generation and policy management, via Cisco Prime and other third-party software. The last column shows a router applying control: high priority for VoIP, medium for browsing, low for streaming, and blocked for P2P.

Cisco Application Visibility and Control

A management and reporting system, such as Cisco Prime, analyzes and presents the application analysis data into dashboard reports for use by network monitoring personnel. Application usage can also be controlled through the quality of service classification and policies based on the AVC information.
The figure shows, on the left, port monitoring with applications listed down the side: unknown, HTTP, HTTPS, ICA, SIP, DNS, CIFS, HSRP, ICMP, LDAP, MSNP, and SAP. Horizontal bars extend from each application, with the longest bars beside unknown and HTTP, which are enclosed in a dotted box. In the application monitoring section, apps are listed with a horizontal bar beside each one, longest first: BitTorrent, Netflix, SharePoint, Gtalk VoIP, Google Docs, RTP, Citrix, SSL, SIP, Skype, WebEx Meeting, HTTPS, Flash Video, DNS, and Facebook.

Port Monitoring vs. Application Monitoring

Content Filter Logs

Devices that provide content filtering, such as the Cisco Email Security Appliance (ESA) and the Cisco Web Security Appliance (WSA), provide a wide range of functionalities for security monitoring. Logging is available for many of these functionalities.
The ESA, for example, has more than 30 logs that can be used to monitor most aspects of email delivery, system functioning, antivirus, antispam operations, and blacklist and whitelist decisions. Most of the logs are stored in text files and can be collected on Syslog servers, or can be pushed to FTP or SCP servers. In addition, alerts regarding the functioning of the appliance itself and its subsystems can be monitored by email to administrators who are responsible for monitoring and operating the device.
WSA devices offer a similar depth of functioning. WSA effectively acts as a web proxy, meaning that it logs all inbound and outbound transaction information for HTTP traffic. These logs can be quite detailed and are customizable. They can be configured in a W3C compatibility format. The WSA can be configured to submit the logs to a server in various ways, including Syslog, FTP, and SCP.
Other logs that are available to the WSA include ACL decision logs, malware scan logs, and web reputation filtering logs.
The figure illustrates the “drill-down” dashboards available from Cisco content filtering devices. By clicking components of the Overview reports, more relevant details are displayed. Target searches provide the most focused information.
The figure on the left shows windows that have charts with vertical bars, charts with horizontal bars, and charts with icons and data. In the middle are the detailed reports with two charts up top with horizontal bars shown followed by a table at the bottom with rows and columns. On the right is the targeted search with blank textboxes available.
 

Logging from Cisco Devices

Cisco security devices can be configured to submit events and alerts to security management platforms using SNMP or Syslog. The figure illustrates a syslog message generated by a Cisco ASA device and a Syslog message generated by a Cisco IOS device.
The figure shows a Cisco ASA device message and a Cisco IOS device message. The ASA message reads *Mar 19 11:22:07.289 EDT: %ASA-3-201008: Disallowing new connections. The leading asterisk indicates NTP status, ASA is the Cisco facility, 3 is the severity, 201008 is the message ID, and the remainder is the message text. The IOS message reads *Sep 16 08:50:47.359 EDT: %SYS-5-CONFIG_I: Configured from console by con0, where SYS is the Cisco facility, 5 is the severity, and CONFIG_I is the mnemonic.

Cisco Syslog Message Formats

Note that there are two meanings used for the term facility in Cisco Syslog messages. The first is the standard set of Facility values that were established by the Syslog standards. These values are used in the PRI message part of the Syslog packet to calculate the message priority. Cisco uses some of the values between 16 and 23 to identify Cisco log Facilities, depending on the platform. For example, Cisco ASA devices use Syslog Facility 20 by default, which corresponds to local4. The other Facility value is assigned by Cisco and occurs in the MSG part of the Syslog message.

 

Cisco devices may use slightly different Syslog message formats and may use mnemonics instead of message IDs, as shown in the figure. A dictionary of Cisco ASA Syslog messages is available on the Cisco website.

Proxy Logs

Proxy servers, such as those used for web and DNS requests, contain valuable logs that are a primary source of data for network security monitoring.
Proxy servers are devices that act as intermediaries for network clients. For example, an enterprise may configure a web proxy to handle web requests on behalf of clients. Instead of requests for web resources being sent directly to the server from the client, the request is sent to a proxy server first. The proxy server requests the resources and returns them to the client. The proxy server generates logs of all requests and responses.
These logs can then be analyzed to determine which hosts are making the requests, whether the destinations are safe or potentially malicious, and to also gain insights into the kind of resources that have been downloaded.
Web proxies provide data that helps determine whether responses from the web were generated in response to legitimate requests or have been manipulated to appear to be responses but are in fact exploits. It is also possible to use web proxies to inspect outgoing traffic as a means of data loss prevention (DLP). DLP involves scanning outgoing traffic to detect whether the data that is leaving the network contains sensitive, confidential, or secret information. Examples of popular web proxies are Squid, CCProxy, Apache Traffic Server, and WinGate.
An example of a Squid web proxy log in the Squid-native format appears below. Explanations of the field values appear in the table below the log entry.

Squid Web Proxy Log Example

1265939281.764     19478 172.16.167.228 TCP_MISS/200 864 GET http://www.example.com//images/home.png - NONE/- image/png
Proxy Log Value Explanation
1265939281.764 Time – in Unix epoch timestamp format with milliseconds
19478 Duration – the elapsed time for the request and response from Squid
172.16.167.228 Client IP address
TCP_MISS/200 Result – Squid result codes and HTTP status code separated by a slash
864 Size – the bytes of data delivered
GET Request – HTTP request made by the client
http://www.example.com//images/home.png URI/URL – address of the resource that was requested
- Client identity – RFC 1413 value for the client that made the request. Not used by default.
NONE/- Peering code/Peer host – neighbor cache server consulted
image/png Type – MIME content type from the Content-Type value in the HTTP response header
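Because the native format is whitespace-delimited, an entry like the one above can be split into named fields. The sketch below is a minimal illustration (the field names are descriptive labels from the table, not Squid's own identifiers), not a production parser:

```python
from datetime import datetime, timezone

def parse_squid_native(line):
    """Split a Squid native-format access log entry into named fields."""
    fields = line.split()
    keys = ["time", "duration", "client", "result", "size",
            "method", "url", "ident", "peer", "type"]
    entry = dict(zip(keys, fields))
    # The timestamp is Unix epoch seconds with milliseconds
    entry["time"] = datetime.fromtimestamp(float(entry["time"]), tz=timezone.utc)
    entry["duration"] = int(entry["duration"])
    entry["size"] = int(entry["size"])
    # The result field is a Squid result code and HTTP status separated by a slash
    entry["result"], entry["status"] = entry["result"].split("/")
    return entry

log = ("1265939281.764 19478 172.16.167.228 TCP_MISS/200 864 "
       "GET http://www.example.com//images/home.png - NONE/- image/png")
entry = parse_squid_native(log)
print(entry["client"], entry["status"], entry["size"])  # 172.16.167.228 200 864
```

From records like this, an analyst can aggregate requests by client IP or by destination to spot hosts contacting suspicious servers.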
Note: Open web proxies, which are proxies that are available to any internet user, can be used to obfuscate threat actor IP addresses. For this reason, lists of known open proxy addresses can be used when blacklisting internet traffic.
Cisco Umbrella
Cisco Umbrella, formerly OpenDNS, offers a hosted DNS service that extends the capability of DNS to include security enhancements. Rather than organizations hosting and maintaining blacklisting, phishing protection, and other DNS-related security, Cisco Umbrella provides these protections in its own DNS service.
Cisco Umbrella is able to apply many more resources to managing DNS than most organizations can afford. Cisco Umbrella functions in part as a DNS super proxy in this regard. The Cisco Umbrella suite of security products applies real-time threat intelligence to managing DNS access and the security of DNS records.
DNS access logs are available from Cisco Umbrella for the subscribed enterprise. Instead of using local or ISP DNS servers, an organization can choose to subscribe to Cisco Umbrella for DNS and other security services. An example of a DNS proxy log appears below. The table explains the meaning of the fields in the log entry.

DNS Proxy Log Example

"2015-01-16 17:48:41","ActiveDirectoryUserName",
"ActiveDirectoryUserName,ADSite,Network",
"10.10.1.100","24.123.132.133","Allowed","1 (A)",
"NOERROR","domain-visited.com.",
"Chat,Photo Sharing,Social Networking,Allow List"
Field Example Explanation
Timestamp 2015-01-16 17:48:41 This is when this request was made in UTC. This is different than the Umbrella dashboard, which converts the time to your specified time zone.
Policy Identity ActiveDirectoryUserName The first identity that matched the request.
Identities ActiveDirectoryUserName,ADSite,Network All identities associated with this request.
Internal Ip 10.10.1.100 The internal IP address that made the request.
External Ip 24.123.132.133 The external IP address that made the request.
Action Allowed Whether the request was allowed or blocked.
QueryType 1 (A) The type of DNS request that was made.
ResponseCode NOERROR The DNS return code for this request.
Domain domain-visited.com. This is the domain that was requested.
Categories Chat, Photo Sharing, Social Networking The security or content categories that the destination matches.
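Because each Umbrella log record is a quoted CSV line, it can be parsed with a standard CSV reader. In this sketch, the field labels are taken from the table above and the record is the example shown:

```python
import csv
import io

# One Umbrella DNS log record in CSV form (from the example above)
record = ('"2015-01-16 17:48:41","ActiveDirectoryUserName",'
          '"ActiveDirectoryUserName,ADSite,Network",'
          '"10.10.1.100","24.123.132.133","Allowed","1 (A)",'
          '"NOERROR","domain-visited.com.",'
          '"Chat,Photo Sharing,Social Networking,Allow List"')

fields = ["timestamp", "policy_identity", "identities", "internal_ip",
          "external_ip", "action", "query_type", "response_code",
          "domain", "categories"]

row = next(csv.reader(io.StringIO(record)))
entry = dict(zip(fields, row))
# The identities and categories fields are themselves comma-separated lists
entry["identities"] = entry["identities"].split(",")
entry["categories"] = entry["categories"].split(",")

print(entry["action"], entry["domain"])  # Allowed domain-visited.com.
```

A CSV reader is used rather than a plain split on commas because the quoted fields themselves contain commas.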

Next-Generation Firewalls

Next-Generation, or NextGen, Firewall devices extend network security beyond IP addresses and Layer 4 port numbers to the application layer and beyond. NextGen Firewalls are advanced devices that provide much more functionality than previous generations of network security devices.
One of those functionalities is reporting dashboards with interactive features that allow quick point-and-click reports on very specific information without the need for SIEM or other event correlators.
Cisco’s line of NextGen Firewall devices (NGFW) use Firepower Services to consolidate multiple security layers into a single platform.
This helps to contain costs and simplify management. Firepower services include application visibility and control, Firepower Next-Generation IPS (NGIPS), reputation and category-based URL filtering, and Advanced Malware Protection (AMP). Firepower devices allow monitoring network security through a web-enabled GUI called Event Viewer.
Common NGFW events include:

  • Connection Event – Connection logs contain data about sessions that are detected directly by the NGIPS. Connection events include basic connection properties such as timestamps, source and destination IP addresses, and metadata about why the connection was logged, such as which access control rule logged the event.

  • Intrusion Event – The system examines the packets that traverse the network for malicious activity that could affect the availability, integrity, and confidentiality of a host and its data. When the system identifies a possible intrusion, it generates an intrusion event, which is a record of the date, time, type of exploit, and contextual information about the source of the attack and its target.
  • Host or Endpoint Event – When a host appears on the network it can be detected by the system and details of the device hardware, IP address, and the last known presence on the network can be logged.
  • Network Discovery Event – Network discovery events represent changes that have been detected in the monitored network. These changes are logged in response to network discovery policies that specify the kinds of data to be collected, the network segments to be monitored, and the hardware interfaces of the device that should be used for event collection.
  • NetFlow Event – Network discovery can use a number of mechanisms, one of which is to use exported NetFlow flow records to generate new events for hosts and servers.

Use Of Security Onion As A Source Of Alerts


 

Security Onion is an open-source suite of Network Security Monitoring (NSM) tools that run on an Ubuntu Linux distribution. Security Onion tools provide three core functions for the cybersecurity analyst: full packet capture and data types, network-based and host-based intrusion detection systems, and alert analyst tools.
Security Onion can be installed as a standalone installation or as a sensor and server platform. Some components of Security Onion are owned and maintained by corporations, such as Cisco and Riverbend Technologies, but are made available as open-source.

 

For more information, and to obtain Security Onion, search the internet for the Security Onion website.
Note: In some resources, you may see Security Onion abbreviated as SO. In this course, we will use Security Onion.

Detection Tools for Collecting Alert Data

Security Onion contains many components. It is an integrated environment that is designed to simplify the deployment of a comprehensive NSM solution. The figure illustrates a simplified view of the way in which some of the components of the Security Onion work together.
The graphic displays a three-level architecture for Security Onion. The bottom level is labelled Data. It includes the following elements: pcaps, content data, transaction data, session data, host logs, alert data, Syslog data, and metadata. The middle level is labelled Detection. It includes the following elements: CapME, Snort, Bro, OSSEC, and Suricata. The top level is labelled Analysis. It includes Sguil, with Wireshark and ELSA supporting Sguil.

A Security Onion Architecture

CapME is a web application that allows viewing of pcap transcripts rendered with the tcpflow or Zeek tools. CapME can be accessed from the Enterprise Log Search and Archive (ELSA) tool. CapME provides the cybersecurity analyst with an easy-to-read means of viewing an entire Layer 4 session. CapME acts as a plugin to ELSA and provides access to relevant pcap files that can be opened in Wireshark.

Analysis Tools

Security Onion integrates these various types of data and Intrusion Detection System (IDS) logs into a single platform through the following tools:

  • Sguil – This provides a high-level console for investigating security alerts from a wide variety of sources. Sguil serves as a starting point in the investigation of security alerts. A wide variety of data sources are available to the cybersecurity analyst by pivoting directly from Sguil to other tools.
  • Kibana – Kibana is an interactive dashboard interface to Elasticsearch data. It allows querying of NSM data and provides flexible visualizations of that data. It provides data exploration and machine learning data analysis features. It is possible to pivot from Sguil directly into Kibana to see contextualized displays based on the source and destination IP addresses that are associated with an alert. Search the internet and visit the elastic.co website to learn more about the many features of Kibana.
  • Wireshark – This is a packet capture application that is integrated into the Security Onion suite. It can be opened directly from other tools and will display full packet captures relevant to the analysis.
  • Zeek – This is a network traffic analyzer that serves as a security monitor. Zeek inspects all traffic on a network segment and enables in-depth analysis of that data. Pivoting from Sguil into Zeek provides access to very accurate transaction logs, file content, and customized output.

Note: Other Security Onion tools that are not shown in the figure are beyond the scope of this course. A full description of the Security Onion and its components can be found on the Security Onion website.


Alert Generation

Security alerts are notification messages that are generated by NSM tools, systems, and security devices. Alerts can come in many forms depending on the source. For example, Syslog provides support for severity ratings which can be used to alert cybersecurity analysts regarding events that require attention.
In Security Onion, Sguil provides a console that integrates alerts from multiple sources into a timestamped queue. A cybersecurity analyst can work through the security queue investigating, classifying, escalating, or retiring alerts. Instead of using a dedicated workflow management system such as Request Tracker for Incident Response (RTIR), a cybersecurity analyst would use the output of an application like Sguil to orchestrate an NSM investigation.
Alerts will generally include five-tuple information when available, as well as timestamps and information identifying which device or system generated the alert. Recall that the five-tuple includes the following information for tracking a conversation between a source and destination application:
  • SrcIP – the source IP address for the event.
  • SPort – the source (local) Layer 4 port for the event.
  • DstIP – the destination IP for the event.
  • DPort – the destination Layer 4 port for the event.
  • Pr – the IP protocol number for the event.

Additional information could include whether a permit or deny decision was applied to the traffic, captured data from the packet payload, a hash value for a downloaded file, or any of a variety of other data.
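As a sketch, the five-tuple can be modeled as a small record type. The field names follow the labels above, and the sample values are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    """The five values that identify a conversation in an alert."""
    src_ip: str    # SrcIP - source IP address
    s_port: int    # SPort - source (local) Layer 4 port
    dst_ip: str    # DstIP - destination IP address
    d_port: int    # DPort - destination Layer 4 port
    protocol: int  # Pr - IP protocol number (6 = TCP, 17 = UDP)

# Illustrative example: a TCP conversation to an FTP server on port 21
alert_key = FiveTuple("209.165.201.17", 40599, "209.165.200.235", 21, 6)
print(alert_key.protocol)  # 6
```

Making the record hashable (frozen) lets a monitoring tool use the five-tuple as a key when correlating repeated events for the same conversation.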

The figure shows the Sguil application window with the queue of alerts that are waiting to be investigated in the top portion of the interface.

Sguil Window

The fields available for the real-time events are as follows:

  • ST – This is the status of the event. RT means real-time. The event is colour-coded by priority. The priorities are based on the category of the alert. There are four priority levels: very low, low, medium, and high. The colours range from light yellow to red as the priority increases.
  • CNT – This is the count for the number of times this event has been detected for the same source and destination IP address. The system has determined that this set of events is correlated. Rather than reporting each in a potentially long series of correlated events in this window, the event is listed once with the number of times it has been detected in this column. High numbers here can represent a security problem or the need for tuning of the event signatures to limit the number of potentially spurious events that are being reported.
  • Sensor – This is the agent reporting the event. The available sensors and their identifying numbers can be found in the Agent Status tab of the pane which appears below the events window on the left. These numbers are also used in the Alert ID column. From the Agent Status pane, we can see that OSSEC, pcap, and Snort sensors are reporting to Sguil. In addition, we can see the default hostnames for these sensors, which includes the monitoring interface. Note that each monitoring interface has both pcap and Snort data associated with it.
  • Alert ID – This two-part number represents the sensor that has reported the problem and the event number for that sensor. We can see from the figure that the largest number of events that are displayed are from the OSSEC sensor (1). The OSSEC sensor has reported eight sets of correlated events. Of these events, 232 have been reported with event ID 1.24.
  • Date/Time – This is the timestamp for the event. In the case of correlated events, it is the timestamp for the first event.
  • Event Message – This is the identifying text for the event. This is configured in the rule that triggered the alert. The associated rule can be viewed in the right-hand pane, just above the packet data. To display the rule, the Show Rule checkbox must be selected.

Depending on the security technology, alerts can be generated based on rules, signatures, anomalies, or behaviours. No matter how they are generated, the conditions that trigger an alert must be predefined in some manner.

Rules and Alerts

Alerts can come from a number of sources:

  • NIDS – Snort, Zeek, and Suricata
  • HIDS – OSSEC, Wazuh
  • Asset management and monitoring – Passive Asset Detection System (PADS)
  • HTTP, DNS, and TCP transactions – Recorded by Zeek and pcaps
  • Syslog messages – Multiple sources

The information found in the alerts that are displayed in Sguil will differ in message format because they come from different sources.
The Sguil alert in the figure was triggered by a rule that was configured in Snort. It is important for the cybersecurity analyst to be able to interpret what triggered the alert so that the alert can be investigated. For this reason, the cybersecurity analyst should understand the components of Snort rules, which are a major source of alerts in Security Onion.

 

The figure shows two main sections, rule and alert, with an arrow pointing from the rule section to the alert section. In the rule section, the Show Packet Data and Show Rule checkboxes are enabled, and the rule text reads: alert tcp $EXTERNAL_NET any -> $HOME_NET 21 (msg:"ET EXPLOIT VSFTPD backdoor user login smiley"; flow:established,to_server; content:"USER"; depth:5; content:"|3a 29|"; distance:0; classtype:attempted-admin; sid:2013188; rev:4;) /nsm/server_data/securityonion/rules/seconion-eth1-1/downloaded.rules: Line 7159. The highlighted alert text reads: RT 1 seconion-eth1-1 5.23 2017-06-19 23:51:12 209.165.201.17 40599 209.165.200.235 21 6 ET EXPLOIT VSFTPD backdoor user login smiley.

Sguil Alert and the Associated Rule

Snort Rule Structure

Snort rules consist of two sections, as shown in the figure: the rule header and the rule options. The rule header contains the action, protocol, source and destination IP addresses and netmasks, and the source and destination port information. The rule options section contains alert messages and information on which parts of the packet should be inspected to determine if the rule action should be taken. Rule Location is sometimes added by Sguil.
Rule Location is the path to the file that contains the rule and the line number at which the rule appears so that it can be found and modified, or eliminated if required.
The figure shows text in blue: alert ip any any -> any any, in green: (msg: GPL ATTACK_RESPONSE id check returned root; content: uid=0|28|root|29|; fast_pattern:only; classtype:bad-unknown; sid:2100498; rev:8;), and in purple: /nsm/server_data/securityonion/rules/seconion-eth1-1/downloaded.rules:Line 692.

Snort Rule Structure and Sguil-supplied Information

Component Example (shortened…) Explanation
rule header alert ip any any -> any any Contains the action to be taken, source and destination addresses and port, and the direction of traffic flow
rule options (msg:”GPL ATTACK_RESPONSE ID CHECK RETURNED ROOT”;…) Includes the message to be displayed, details of packet content, alert type, source ID, and additional details, such as a reference for the rule or vulnerability
rule location /nsm/server_data/securityonion/rules/… Added by Sguil to indicate the location of the rule in the Security Onion file structure and in the specified rule file
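The two-section structure can be illustrated by splitting a rule at its parenthesized options section. This is a rough sketch, not a full Snort parser (it does not handle parentheses inside quoted content):

```python
def split_snort_rule(rule):
    """Split a Snort rule into its header and a list of option fields."""
    header, _, rest = rule.partition("(")
    options = rest.rstrip().rstrip(")")
    # Options are semicolon-terminated key:value fields
    return header.strip(), [opt.strip() for opt in options.split(";") if opt.strip()]

rule = ('alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check '
        'returned root"; content:"uid=0|28|root|29|"; fast_pattern:only; '
        'classtype:bad-unknown; sid:2100498; rev:8;)')
header, options = split_snort_rule(rule)
print(header)      # alert ip any any -> any any
print(options[0])  # msg:"GPL ATTACK_RESPONSE id check returned root"
```

Separating the sections this way mirrors how the table above describes them: the header carries action, protocol, addressing, and direction; the options carry the message and inspection details.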
The Rule Header
The rule header contains the action, protocol, addressing, and port information, as shown in the figure. In addition, the direction of flow that triggered the alert is indicated. The structure of the header portion is consistent with Snort alert rules.
Snort can be configured to use variables to represent internal and external IP addresses.
These variables, $HOME_NET and $EXTERNAL_NET appear in the Snort rules. They simplify the creation of rules by eliminating the need to specify specific addresses and masks for every rule. The values for these variables are configured in the snort.conf file. Snort also allows individual IP addresses, blocks of addresses, or lists of either to be specified in rules. Ranges of ports can be specified by separating the upper and lower values of the range with a colon. Other operators are also available.
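For illustration, these variables might be defined in snort.conf using the Snort 2.x ipvar and portvar syntax; the values below are modeled on common defaults and should be adjusted for the network being monitored:

```text
# snort.conf (illustrative values - adjust for your network)
ipvar HOME_NET 192.168.0.0/16
ipvar EXTERNAL_NET !$HOME_NET

# Lists of addresses and ranges of ports are also allowed
ipvar DNS_SERVERS [192.168.1.10,192.168.1.11]
portvar SHELLCODE_PORTS !80
portvar ORACLE_PORTS 1024:
```

Here 1024: specifies the range of ports from 1024 upward, and ! negates an address or port specification.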
The figure shows text in blue: alert ip any any -> any any, then text in normal font: (msg: GPL ATTACK_RESPONSE id check returned root; content: uid=0|28|root|29|; fast_pattern:only; classtype:bad-unknown; sid:2100498; rev:8;) /nsm/server_data/securityonion/rules/seconion-eth1-1/downloaded.rules:Line 692.

Snort Rule Header Structure

Component Explanation
alert the action to be taken is to issue an alert, other actions are log and pass
ip the protocol
any any the specified source is any IP address and any Layer 4 port
-> the direction of flow is from the source to the destination
any any the specified destination is any IP address and any Layer 4 port
The Rule Options
The structure of the options section of the rule is variable. It is the portion of the rule that is enclosed in parentheses, as shown in the figure. It contains the text message that identifies the alert. It also contains metadata about the alert, such as a URL that provides reference information for the alert.
Other information can be included, such as the type of rule and a unique numeric identifier for the rule and the rule revision. In addition, features of the packet payload may be specified in the options. The Snort users manual, which can be found on the internet, provides details about rules and how to create them.

Snort rule messages may include the source of the rule. Three common sources for Snort rules are:

  • GPL – Older Snort rules that were created by Sourcefire and distributed under the GPLv2 license. The GPL ruleset is not Cisco Talos certified. It includes Snort SIDs 3464 and below. The GPL ruleset can be downloaded from the Snort website, and it is included in Security Onion.
  • ET – Snort rules from Emerging Threats. Emerging Threats is a collection point for Snort rules from multiple sources. ET rules are open source under a BSD license. The ET ruleset contains rules from multiple categories. A set of ET rules is included with Security Onion. Emerging Threats is a division of Proofpoint, Inc.
  • VRT – These rules are immediately available to subscribers and are released to registered users 30 days after they were created, with some limitations. They are now created and maintained by Cisco Talos.

Rules can be downloaded automatically from Snort.org using the PulledPork rule management utility that is included with Security Onion.
Alerts that are not generated by Snort rules are identified by the OSSEC or PADS tags, among others. In addition, custom local rules can be created.

The figure shows text in normal font: alert ip any any -> any any, then text in green: (msg: GPL ATTACK_RESPONSE id check returned root; content: uid=0|28|root|29|; fast_pattern:only; classtype:bad-unknown; sid:2100498; rev:8;), then text in normal font: /nsm/server_data/securityonion/rules/seconion-eth1-1/downloaded.rules:Line 692.

Snort Rules Options Structure

Component Explanation
msg: Text that describes the alert.
content: Refers to content of the packet. In this case, an alert will be sent if the literal text “uid=0(root)” appears anywhere in the packet data. Values specifying the location of the text in the data payload can be provided.
reference: This is not shown in the figure. It is often a link to a URL that provides more information on the rule. In this case, the sid is hyperlinked to the source of the rule on the internet.
classtype: A category for the attack. Snort includes a set of default categories that have one of four priority values.
sid: A unique numeric identifier for the rule.
rev: The revision of the rule that is represented by the sid.
 

Lab – Snort and Firewall Rules

Different security appliances and software perform different functions and record different events. As a consequence, the alerts that are generated by different appliances and software will also vary.
In this lab, to get familiar with firewall rules and IDS signatures you will:
  • Perform live monitoring of IDS events.
  • Configure your own customized firewall rule to stop internal hosts from contacting a malware-hosting server.
  • Craft a malicious packet and launch it against an internal target.
  • Create a customized IDS rule to detect the customized attack and issue an alert based on it.

Action Point
PS: If you would like to have an online course on any of the courses that you found on this blog, I will be glad to do that on an individual and corporate level. I have trained several individuals and groups and they are doing well in their various fields of endeavour. Some of those that I have trained include staff of Dangote Refinery, FCMB, Zenith Bank, and New Horizons Nigeria, among others. Please come on WhatsApp and let’s talk about your training. You can reach me on WhatsApp HERE. Please note that I will be using Microsoft Teams to facilitate the training.

I know you might agree with some of the points that I have raised in this article. You might not agree with some of the issues raised. Let me know your views about the topic discussed. We will appreciate it if you can drop your comment. Thanks in anticipation.

Fact Check Policy

CRMNIGERIA is committed to fact-checking in a fair, transparent and non-partisan manner. Therefore, if you’ve found an error in any of our reports, be it factual, editorial, or an outdated post, please contact us to tell us about it.

The Need For Alert Evaluation In Cybersecurity

The threat landscape is constantly changing as new vulnerabilities are discovered and new threats evolve. As user and organizational needs change, so does the attack surface. Threat actors have learned how to quickly vary the features of their exploits in order to evade detection. This article talks about alert evaluation in cybersecurity.


It is impossible to design measures to prevent all exploits. Exploits will inevitably evade protection measures, no matter how sophisticated they may be. Sometimes, the best that can be done is to detect exploits during or after they have occurred.
Detection rules should be overly conservative. In other words, it is better to have alerts that are sometimes generated by innocent traffic, than it is to have rules that miss malicious traffic. For this reason, it is necessary to have skilled cybersecurity analysts investigate alerts to determine if an exploit has actually occurred.

 

Tier 1 cybersecurity analysts will typically work through queues of alerts in a tool like Sguil, pivoting to tools like Zeek, Wireshark, and Kibana to verify that an alert represents an actual exploit.
The figure shows a Sguil textbox at the top with a line pointing to each of the three textboxes below it: Kibana, Zeek, and Wireshark.

Primary Tools for the Tier 1 Cybersecurity Analyst

Evaluating Alerts

Security incidents are classified using a scheme borrowed from medical diagnostics. This classification scheme is used to guide actions and to evaluate diagnostic procedures. For example, when a patient visits a doctor for a routine examination, one of the doctor’s tasks is to determine whether the patient is sick.
One of the outcomes can be a correct determination that disease is present and the patient is sick. Another outcome can be that there is no disease and the patient is healthy.
The concern is that either diagnosis can be accurate, or true, or inaccurate, or false. For example, the doctor could miss the signs of disease and make the incorrect determination that the patient is well when they are in fact sick. Another possible error is to rule that a patient is sick when that patient is in fact healthy. False diagnoses are either costly or dangerous.

 

In network security analysis, the cybersecurity analyst is presented with an alert. This is similar to a patient going to the doctor and saying, “I am sick.” The cybersecurity analyst, like the doctor, needs to determine if this diagnosis is true. The cybersecurity analyst asks, “The system says that an exploit has occurred. Is this true?”

  • True Positive: The alert has been verified to be an actual security incident.
  • False Positive: The alert does not indicate an actual security incident. Benign activity that results in a false positive is sometimes referred to as a benign trigger.

An alternative situation is that an alert was not generated. The absence of an alert can be classified as:

  • True Negative: No security incident has occurred. The activity is benign.
  • False Negative: An undetected incident has occurred.
When an alert is issued, it will receive one of four possible classifications:
Classification True False
Positive (alert exists) Incident occurred (true positive) No incident occurred (false positive)
Negative (no alert exists) No incident occurred (true negative) Incident occurred (false negative)
Note: “True” events are desirable. “False” events are undesirable and potentially dangerous.
True positives are the desired type of alert. They mean that the rules that generate alerts have worked correctly.
False positives are not desirable. Although they do not indicate that an undetected exploit has occurred, they are costly because cybersecurity analysts must investigate false alarms; therefore, time is taken away from the investigation of alerts that indicate true exploits.
True negatives are desirable. They indicate that benign normal traffic is correctly ignored, and erroneous alerts are not being issued.
False negatives are dangerous. They indicate that exploits are not being detected by the security systems that are in place. These incidents could go undetected for a long time, and ongoing data loss and damage could result.
Benign events are those that should not trigger alerts. Excess benign events indicate that some rules or other detectors need to be improved or eliminated.
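These four outcomes are often summarized numerically. The sketch below computes common detection metrics from hypothetical counts taken from one day of alert review:

```python
def detection_metrics(tp, fp, tn, fn):
    """Summarize alert quality from the four outcome counts."""
    precision = tp / (tp + fp)  # fraction of alerts that were real incidents
    recall = tp / (tp + fn)     # fraction of incidents that produced alerts
    fp_rate = fp / (fp + tn)    # fraction of benign traffic that raised alerts
    return precision, recall, fp_rate

# Hypothetical counts: 40 true positives, 10 false positives,
# 940 true negatives, 10 false negatives
precision, recall, fp_rate = detection_metrics(tp=40, fp=10, tn=940, fn=10)
print(round(precision, 2), round(recall, 2), round(fp_rate, 2))  # 0.8 0.8 0.01
```

A falling precision value over time would indicate the kind of false-positive burden described above, signalling that detection rules need tuning.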
When true positives are suspected, a cybersecurity analyst is sometimes required to escalate the alert to a higher level for investigation. The investigator will move forward with the investigation in order to confirm the incident and identify any potential damage that may have been caused.
This information will be used by more senior security personnel who will work to isolate the damage, address vulnerabilities, mitigate the threat, and deal with reporting requirements.
A cybersecurity analyst may also be responsible for informing security personnel that false positives are occurring to the extent that the cybersecurity analyst’s time is seriously impacted. This situation indicates that security monitoring systems need to be tuned to become more efficient. Legitimate changes in the network configuration or newly downloaded detection rules could result in a sudden spike in false positives as well.
False negatives may be discovered well after an exploit has occurred. This can happen through retrospective security analysis (RSA). RSA can occur when newly obtained rules or other threat intelligence is applied to archived network security data. For this reason, it is important to monitor threat intelligence to learn of new vulnerabilities and exploits and to evaluate the likelihood that the network was vulnerable to them at some time in the past.
In addition, the exploit needs to be evaluated regarding the potential damage that the enterprise could suffer. It may be determined that adding new mitigation techniques is sufficient, or that a more detailed analysis should be conducted.

 

 


Elastic Data Core Components In Cybersecurity

 

A typical network has a multitude of different logs to keep track of, and most of those logs are in different formats. With huge amounts of disparate data, how is it possible to get an overview of network operations while also getting a sense of subtle anomalies or changes in the network? This article talks about all that you need to know about the Elastic data core components in cybersecurity.

 

The Elastic Stack attempts to solve this problem by providing a single interface view into a heterogeneous network. The Elastic Stack consists of Elasticsearch, Logstash, and Kibana (ELK). It is a highly scalable and modular framework for ingesting, analyzing, storing and visualizing data.

Elasticsearch is an open-core platform (open source in the core components) for searching and analyzing an organization’s data in near real-time. It can be used in many different contexts but has gained popularity in network security as a SIEM tool. Security Onion includes ELK and other components from Elastic including:

  • Beats – This is a series of software plugins that send different types of data to the Elasticsearch data stores.
  • ElastAlert – This provides queries and security alerts based on user-defined criteria and other information from data in Elasticsearch. Alert notifications can be sent to a console, or email and other notification systems such as TheHive security incident response platform.
  • Curator – This provides actions to manage Elasticsearch data indices.

Elasticsearch, which is the search engine component, uses RESTful web services and APIs, a distributed computing cluster with multiple server nodes, and a distributed NoSQL database made up of JSON documents. Additional functionality can be added through custom-created extensions.

The Elasticsearch company offers a commercial extension called X-Pack which adds security, alerting, monitoring, reporting, and graphs. The company also offers a machine-learning add-on as well as their own Elastic SIEM product.

Logstash enables the collection and normalization of network data into data indexes that can be efficiently searched by Elasticsearch. Logstash and Beats modules are used to ingest data into the Elasticsearch cluster.

Kibana provides a graphical interface to data that is compiled by Elasticsearch. It enables visualization of network data and provides tools and shortcuts for querying that data in order to isolate potential security breaches.

The core open source components of the Elastic Stack are Logstash, Beats, Elasticsearch, and Kibana, as shown in the figure.

 

The figure shows the core components of the Elastic Stack: Kibana, which is used to access, visualize, and investigate data; Elasticsearch, which is used to store, index, and analyze data; and Logstash and Beats, which are used to acquire or ingest network data.

Elastic Stack Core Components

Logstash
Logstash is an extract, transform, and load (ETL) system with the ability to take in various sources of log data and transform or parse the data through translation, sorting, aggregating, splitting, and validation. After transforming the data, the data is loaded into the Elasticsearch database in the proper file format. The figure shows some of the fields that are available in Logstash, as shown in the Kibana Management interface.

Kibana Management Frame Showing Logstash Index Details

Beats
Beats agents are open source software clients used to send operational data directly into Elasticsearch or through Logstash. Elastic, as well as the open-source community, actively develop Beats agents, so there are a huge variety of Beats agents for sending data to Elasticsearch in near real-time.
Some of the Beats agents provided by Elastic are Auditbeat for audit data, Metricbeat for metrics data, Heartbeat for availability, Packetbeat for network traffic, Journalbeat for Systemd journals, and Winlogbeat for Windows event logs. Some community-sourced Beats are Amazonbeat, Apachebeat, Dockbeat, Nginxbeat, and Mqttbeat to name a few.
Elasticsearch
Elasticsearch is a cross-platform enterprise search engine written in Java. The core components are open-source with commercial addons called X-packs that give additional functionality. Elasticsearch supports near real-time search using simple REST APIs to create or update JavaScript Object Notation (JSON) documents using HTTP requests. Searches can be made using any program capable of making HTTP requests such as a web browser, Postman, cURL, etc. These APIs can also be accessed by Python or other programming language scripts for automated operations.
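As a sketch of how such a scripted search might look (the host, port, and field name here are assumptions for illustration, not values taken from this course), the Python standard library alone is enough to build the HTTP request:

```python
import urllib.request

def build_search_request(host, query_string):
    """Build (but do not send) a Lucene-style URI search request
    for an Elasticsearch node."""
    url = f"http://{host}/_search?q={query_string}"
    return urllib.request.Request(url, method="GET")

# Hypothetical local node and field name; adjust to your own deployment.
req = build_search_request("localhost:9200", "src_ip:192.168.1.5")
print(req.full_url)  # http://localhost:9200/_search?q=src_ip:192.168.1.5

# On a host where Elasticsearch is actually listening, the query would be
# sent with urllib.request.urlopen(req) and the JSON response parsed
# from the response body.
```

The same URL works unchanged from a browser, cURL, or Postman, which is the point of the REST interface.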
The Elasticsearch data structure is called an inverted index, which is designed to allow very fast full-text searches. An index is like a database, it is a namespace for a collection of documents that are related to each other. An index can be partitioned or mapped into different types.
If you compare an Elasticsearch index to a traditional relational database, the index is like the database, the types are like the tables, and the documents are like the columns and rows, as shown in the table.
MySQL Component           Elasticsearch Component
database                  index
tables                    types
columns/rows              documents
Elasticsearch stores data in JSON-formatted documents. A JSON document is organized into hierarchies of key/value pairs, with a key being a name and the corresponding value is either a string, number, Boolean, date, array, or another type of data.
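A minimal sketch of such a document, with invented field names, showing how Python's json module moves between the serialized text and the key/value hierarchy:

```python
import json

# A hypothetical log entry as Elasticsearch might store it.
doc = {
    "timestamp": "2017-07-24T19:39:35+00:00",     # date as a string
    "src": {"ip": "192.168.1.5", "port": 40754},  # nested object
    "alert": True,                                # Boolean
    "tags": ["snort", "emerging-threats"],        # array
}

text = json.dumps(doc)       # serialize for an HTTP request body
restored = json.loads(text)  # parse back into key/value pairs
print(restored["src"]["ip"])  # 192.168.1.5
```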
Kibana
Kibana provides an easy to use graphical user interface for managing Elasticsearch. By using a web browser, an analyst can use the Kibana interface to search and view indices.
The Management tab allows you to create and manage indices and their types and formats. The Discover tab is a quick and powerful way to view your data and search it using the search tools.
The visualize tab allows you to create custom visualizations like bar charts, line charts, pie charts, heat maps, and more. The visualizations you create can be organized into customized dashboards for monitoring and analyzing your data. A Kibana dashboard is shown in the figure.

A Kibana Dashboard

Data Reduction

The amount of network traffic that is collected by packet captures and the number of log file entries and alerts that are generated by network and security devices can be enormous. Even with recent advances in Big Data, processing, storing, accessing, and archiving NSM-related data is a daunting task.
For this reason, it is important to identify the network data that should be gathered. Not every log file entry, packet, and alert needs to be gathered. By limiting the volume of data, tools like Elasticsearch will be far more useful, as shown in the figure.
Some network traffic has little value to NSM.
Encrypted data, such as IPsec or SSL traffic, is largely unreadable. Some traffic, such as that generated by routing protocols or Spanning Tree Protocol, is routine and can be excluded. Other broadcast and multicast protocols can usually be eliminated from packet captures, as can traffic from other protocols that generate a lot of routine traffic.
In addition, alerts that are generated by a HIDS, such as Windows security auditing or OSSEC, should be evaluated for relevance.
Some are informational or of low potential security impact. These messages can be filtered from NSM data. Similarly, Syslog may store messages of very low severity that could be disregarded to diminish the quantity of NSM data to be handled.
The figure is a simplified representation of how data like PCAPS, logs, and alerts are fed into the Logstash or the Elastic stack and parsed into relevant network security monitoring data.

Data Normalization

Data normalization is the process of combining data from a number of data sources into a common format. Logstash provides a series of transformations that process security data and transform it before adding it to Elasticsearch. Additional plugins can be created to suit the needs of the organization.
A common schema will specify the names and formats for the required data fields. Formatting of the data fields can vary widely between sources. However, if searching is to be effective, the data fields must be consistent.
For example, IPv6 addresses, MAC addresses, and date and time information can be represented in varying formats. Similarly, subnet masks, DNS records, and so on can vary in format between data sources. Logstash transformations accept the data in its native format and make elements of the data consistent across all sources. For example, a single format will be used for addresses and timestamps for data from all sources.

IPv6 Address Formats

  • 2001:db8:acad:1111:2222::33
  • 2001:DB8:ACAD:1111:2222::33
  • 2001:DB8:ACAD:1111:2222:0:0:33
  • 2001:DB8:ACAD:1111:2222:0000:0000:0033

MAC Formats

  • A7:03:DB:7C:91:AA
  • A7-03-DB-7C-91-AA
  • A703.DB7C.91AA

Date Formats

  • Monday, July 24, 2017 7:39:35pm
  • Mon, 24 Jul 2017 19:39:35 +0000
  • 2017-07-24T19:39:35+00:00
  • 1500925254
Data normalization is required to simplify searching for correlated events. If differently formatted values exist in the NSM data for IPv6 addresses, for example, a separate query term would need to be created for every variation in order for correlated events to be returned by the query.
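Logstash itself expresses transformations in its own filter configuration language; purely as an illustration of the idea, this Python sketch reduces the MAC, IPv6, and date variants listed above to single canonical forms:

```python
import ipaddress
import re
from datetime import datetime, timezone

def norm_mac(mac):
    """Strip any separator style and re-emit as lowercase colon pairs."""
    digits = re.sub(r"[^0-9A-Fa-f]", "", mac).lower()
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

def norm_ipv6(addr):
    """Use the standard compressed, lowercase representation."""
    return str(ipaddress.ip_address(addr))

def norm_epoch(ts):
    """Render a Unix timestamp as ISO 8601 UTC."""
    return datetime.fromtimestamp(int(ts), tz=timezone.utc).isoformat()

print(norm_mac("A7-03-DB-7C-91-AA"))  # a7:03:db:7c:91:aa
print(norm_ipv6("2001:DB8:ACAD:1111:2222:0000:0000:0033"))  # 2001:db8:acad:1111:2222::33
print(norm_epoch(1500925254))  # 2017-07-24T19:40:54+00:00
```

Once every source passes through transforms like these, one query term matches the same address or timestamp regardless of which device logged it.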

Data Archiving

Everyone would love the security of collecting and saving everything, just in case. However, retaining NSM data indefinitely is not feasible due to storage and access issues. It should be noted that the retention period for certain types of network security information may be specified by compliance frameworks. For example, the Payment Card Industry Data Security Standard (PCI DSS) requires that an audit trail of user activities related to protected information be retained for one year.
Security Onion has different data retention periods for different types of NSM data. For pcaps and raw Bro logs, a value assigned in the securityonion.conf file controls the percentage of disk space that can be used by log files. By default, this value is set to 90%. For Elasticsearch, retention of data indices is controlled by the Elasticsearch Curator. Curator runs in a Docker container and executes every minute according to cron jobs. Curator logs its activity to curator.log. Curator defaults to closing indices older than 30 days. To modify this, change CURATOR_CLOSE_DAYS in /etc/nsm/securityonion.conf. As a disk reaches capacity, Curator deletes old indices to prevent your disk from filling up. To change the limit, modify LOG_SIZE_LIMIT in /etc/nsm/securityonion.conf.
Sguil alert data is retained for 30 days by default. This value is set in the securityonion.conf file.
Security Onion is known to require a lot of storage and RAM to run properly. Depending on the size of the network, multiple terabytes of storage may be required. Of course, Security Onion data can always be archived to external storage by a data archive system, depending on the needs and capabilities of the organization.
Note: The storage locations for the different types of Security Onion data will vary based on the Security Onion implementation.
Log entries are generated by network devices, operating systems, applications, and various types of programmable devices. A file containing a time-sequenced stream of log entries is called a log file. By nature, log files record events that are relevant to the source. The syntax and format of data within log messages are often defined by the application developer.
Therefore, the terminology used in the log entries often varies from source to source. For example, depending on the source, the terms login, logon, authentication event, and user connection, may all appear in log entries to describe a successful user authentication to a server.
It is desirable to have consistent and uniform terminology in logs generated by different sources. This is especially true when all log files are being collected by a centralized point. The term normalization refers to the process of converting parts of a message, in this case, a log entry, to a common format.
In this lab, you will use command-line tools to manually normalize log entries. In Part 2, the timestamp field must be normalized. In Part 3, the IPv6 field requires normalization.

 


 

Using Sguil In Investigating Network Data

 

The primary duty of a cybersecurity analyst is the verification of security alerts. Depending on the organization, the tools used to do this will vary. For example, a ticketing system may be used to manage task assignments and documentation.
In Security Onion, the first place that a cybersecurity analyst will go to verify alerts is Sguil. In this article, I want to look at some of the ways of investigating network data in cybersecurity. Follow me as we are going to look at that in this article. 

 

Sguil automatically correlates similar alerts into a single line and provides a way to view correlated events represented by that line. In order to get a sense of what has been happening in the network, it may be useful to sort on the CNT column to display the alerts with the highest frequency.
Right-clicking the CNT value and selecting View Correlated Events opens a tab that displays all events that are related by Sguil.
This can help the cybersecurity analyst understand the time frame during which the correlated events were received by Sguil. Note that each event receives a unique event ID. Only the first event ID in the series of correlated events is displayed in the RealTime Events tab. The figure shows Sguil alerts sorted on CNT with the View Correlated Events menu open.

Sguil Alerts Sorted on CNT

Sguil Queries

Queries can be constructed in Sguil using the Query Builder. It simplifies constructing queries to a certain degree, but the cybersecurity analyst must know the field names and some issues with field values. For example, Sguil stores IP addresses in an integer representation. In order to query an IP address in dotted-decimal notation, the IP address value must be placed within the INET_ATON() function. Query Builder is opened from the Sguil Query menu. Select Query Event Table to search active events.
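MySQL's INET_ATON() simply packs the four octets of a dotted-decimal address into one 32-bit integer. A small Python sketch of the equivalent conversion shows the value that such a query actually compares against:

```python
import socket
import struct

def inet_aton_int(dotted):
    """Return the same 32-bit integer MySQL's INET_ATON() produces."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

print(inet_aton_int("192.168.1.5"))  # 3232235781
# So a Sguil query on this address compares event.src_ip
# against INET_ATON('192.168.1.5'), i.e. 3232235781.
```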

The table shows the names of some of the event table fields that can be queried directly. Selecting Show DataBase Tables from the Query menu displays a reference to the field names and types for each of the tables that can be queried. When conducting event table searches, use the pattern event.fieldName = value.

 

Field Name (Type): Description
sid (int): the unique ID of the sensor
cid (int): the sensor’s unique event number
signature (varchar): the human-readable name of the event (e.g. “WEB-IIS view source via translate header”)
timestamp (datetime): the date and time the event occurred on the sensor
status (int): the Sguil classification assigned to this event; unclassified events are priority 0
src_ip (int): the source IP for the event. Use the INET_ATON() function to convert the address to the database’s integer representation.
dst_ip (int): the destination IP for the event
src_port (int): the source port of the packet that triggered the event
dst_port (int): the destination port of the packet that triggered the event
ip_proto (int): IP protocol type of the packet (6 = TCP, 17 = UDP, 1 = ICMP; others are possible)
The figure shows a simple timestamp and IP address query made in the Query Builder window. Note the use of the INET_ATON() function to simplify entering an IP address.
In the example below, the cybersecurity analyst is investigating a source port 40754 that is associated with an Emerging Threats alert. Towards the end of the query, the WHERE event.src_port = ‘40754’ portion was created by the user in Query Builder. The remainder of the query is supplied automatically by Sguil and concerns how the data that is associated with the events is to be retrieved, displayed, and presented.

Pivoting from Sguil

Sguil provides the ability for the cybersecurity analyst to pivot to other information sources and tools. Log files are available in Elasticsearch. Relevant packet captures can be displayed in Wireshark.
Transcripts of TCP sessions and Zeek (Bro) detection information are also available. The menu shown in the figure was opened by right-clicking on an Alert ID. Selecting from this menu will open information about the alert in other tools, which provides rich, contextualized information to the cybersecurity analyst.

Pivoting from Sguil

Additionally, Sguil can provide pivots to Passive Real-time Asset Detection System (PRADS) and Security Analyst Network Connection Profiler (SANCP) information. These tools are accessed by right-clicking on an IP address for an event and selecting the Quick Query or Advanced Query menus.
PRADS gathers network profiling data, including information about the behaviour of assets on the network. PRADS is an event source, like Snort and OSSEC. It can also be queried through Sguil when an alert indicates that an internal host may have been compromised.
Executing a PRADS query out of Sguil can provide information about the services, applications, and payloads that may be relevant to the alert. In addition, PRADS detects when new assets appear on the network.
Note: The Sguil interface refers to PADS instead of PRADS. PADS was the predecessor to PRADS. PRADS is the tool that is actually used in Security Onion. PRADS is also used to populate SANCP tables. In Security Onion, the functionalities of SANCP have been replaced by PRADS, however, the term SANCP is still used in the Sguil interface. PRADS collects the data, and a SANCP agent records the data in a SANCP data table.

 

The SANCP functionalities concern collecting and recording statistical information about network traffic and behaviour. SANCP provides a means of verifying that network connections are valid. This is done through the application of rules that indicate which traffic should be recorded and the information with which the traffic should be tagged.

Event Handling in Sguil

Finally, Sguil is not only a console that facilitates the investigation of alerts. It is also a tool for addressing or classifying alerts. Three tasks can be completed in Sguil to manage alerts. First, alerts that have been found to be false positives can be expired.

This can be done by right-clicking in the ST column for the event and using the menu, or by pressing the F8 key. An expired event disappears from the queue. Second, if the cybersecurity analyst is uncertain how to handle an event, it can be escalated by pressing the F9 key.

The alert will be moved to the Sguil Escalated Events tab. Finally, an event can be categorized. Categorization is for events that have been identified as true positives.

 

Sguil includes seven pre-built categories that can be assigned by using a menu, which is shown in the figure, or by pressing the corresponding function key. For example, an event would be categorized as Cat I by pressing the F1 key. In addition, criteria can be created that will automatically categorize an event.

Categorized events are assumed to have been handled by the cybersecurity analyst. When an event is categorized, it is removed from the list of RealTime Events. The event remains in the database, however, and it can be accessed by queries that are issued by category.

 

This course covers Sguil at a basic level. Numerous resources exist on the internet for learning more.

 

Event Handling in Sguil

Working in ELK

Logstash and Beats are used for data ingestion in the Elastic Stack. They provide access to large numbers of log file entries. Because the number of logs that can be displayed is so large, Kibana, which is the visual interface into the logs, is configured to show the last 24 hours by default. You can adjust the time range to view broader or older ranges of data.
In order to see log file records for a different period of time, click the Last 24 hours tab in the upper right corner of Kibana. From there, set the Time Range by selecting the Quick tab for predefined time ranges. You can also enter the dates and times manually using the Absolute tab.
The figure shows an Absolute time range from May 17th to May 18th, 2020. Logs are ingested into Elasticsearch into separate indices or databases based on a configured range of time.
The best way to monitor your data in Elasticsearch is to build customized visual dashboards that track the data that you are interested in. A variety of visual charts, including bar graphs, pie charts, count metrics, heat maps, geo maps, and top-N lists, are available. In Kibana, visualizations and charts can be searched and filtered with specific metrics and buckets of data.

Queries in ELK

Elasticsearch is built on Apache Lucene, an open-source search engine software library that features full-text indexing and searching capabilities. Elasticsearch ingests data as JSON documents that are stored in indices, and those documents are mapped to various datatypes using index patterns. The index patterns create a data structure of JSON-formatted fields and values. The datatypes in the fields can be in the following formats:

  • Core Datatypes: Text (Strings), Numeric, Date, Boolean, Binary, and Range
  • Complex Datatypes: Object (JSON), Nested (arrays of JSON objects)
  • Geo Datatypes: Geo-point (latitude/longitude), Geo-shape (polygons)
  • Specialized Datatypes: IP addresses, Token count, Histogram, etc.

Using Lucene software libraries, Elasticsearch has its own query language based on JSON called Query DSL (Domain Specific Language). Query DSL features leaf queries, compound queries, and expensive queries.

Leaf queries look for a specific value in a specific field, such as the match, term, or range queries. Compound queries enclose other leaf or compound queries and are used to combine multiple queries in a logical fashion. Expensive queries execute slowly and include fuzzy matching, regex matching, and wildcard matching.
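As an illustration of the difference (the field names are hypothetical), a leaf term query and a compound bool query that combines two leaves can be written as JSON documents:

```python
import json

# Leaf query: match one value in one field.
leaf = {"query": {"term": {"dst.port": 80}}}

# Compound query: bool combines leaf queries logically;
# every clause under "must" has to match.
compound = {
    "query": {
        "bool": {
            "must": [
                {"term": {"dst.port": 80}},
                {"range": {"ttl": {"gte": 100, "lte": 400}}},
            ]
        }
    }
}

body = json.dumps(compound)  # this JSON becomes the request body for /_search
```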

 

Query Language
Along with JSON, Elasticsearch queries make use of the following elements: Boolean operators, Fields, Ranges, Wildcards, Regex, Fuzzy search, Text search.
  • Boolean Operators – AND, OR, and NOT operators:
    • “php” OR “zip” OR “exe” OR “jar” OR “run”
    • “RST” AND “ACK”
  • Fields – In colon-separated key: value pairs you specify the key field, a colon, a space and the value:
    • dst.ip: “192.168.1.5”
    • dst.port: 80
  • Ranges – You can search for fields within a specific range using square brackets (inclusive) or curly braces (exclusive) range:
    • host:[1 TO 255] — Will return events with host values between 1 and 255, inclusive
    • TTL:{100 TO 400} — Will return events with TTL values between 101 and 399
    • name: [Admin TO User] — Will return names between and including Admin and User
  • Wildcards – The * character is for multiple character wildcards and the ? character for single-character wildcards:
    • P?ssw?rd — Will match Password and P@ssw0rd
    • Pas* — Will match Pass, Passwd, and Password
  • Regex – These are placed between forward slashes (/):
    • /d[ao]n/ — Will match both dan and don
    • /<.+>/ — Will match text that resembles an HTML tag
  • Fuzzy Search – Fuzzy searching uses the Damerau-Levenshtein Distance to match terms that are similar in spelling. This is great when your data set has misspelt words. Use the tilde (~) to find similar terms:
    • index.php~ – This may return results like “index.html,” “home.php”, and “info.php.”
    • Use the tilde (~) along with a number to specify how big the distance between words can be:
    • term~2 – This will match, among other things: “team,” “terms,” “trem,” and “torn”
  • Text search – Type in the term or value you want to find. This can be a field, or a string within a field, etc.

 

Query Execution
Elasticsearch was designed to interface with users using web-based clients that follow the HTTP REST framework. Queries can be executed using the following methods:
  • URI – Elasticsearch can execute queries using URI searches:
    • http://localhost:9200/_search?q=query:ns.example.com
  • cURL – Elasticsearch can execute queries using cURL from the command line:
    • curl “localhost:9200/_search?q=query:ns.example.com”
  • JSON – Elasticsearch can execute queries with a request body search using a JSON document beginning with a query element, and a query formatted using the Query Domain Specific Language.
  • Dev Tools – Elasticsearch can execute queries using the Dev Tools console in Kibana and a query formatted using the Query Domain Specific Language.

Note: Advanced Elasticsearch queries are beyond the scope of this course. In the labs, you will be provided with complex query statements, if necessary.

Investigating Process or API Calls

Applications interact with an operating system (OS) through system calls to the OS application programming interface (API), as shown in the figure. These system calls allow access to many aspects of system operation such as:

  • Software process control
  • File management
  • Device management
  • Information management
  • Communication

Malware can also make system calls. If the malware can fool an OS kernel into allowing it to make system calls, many exploits are possible.
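As a small illustration of a few of these categories (a sketch only, not tied to any monitoring tool), Python's os module exposes several of the underlying system calls directly:

```python
import os
import tempfile

pid = os.getpid()  # process control: our own process ID

# File management: create, write, stat, and remove a temporary file.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
size = os.stat(path).st_size  # information management: file metadata
os.remove(path)

print(pid, size)  # e.g. 12345 5
```

A HIDS watches for exactly this kind of activity: which processes made which calls, against which files and devices.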

HIDS software tracks the operation of a host OS. OSSEC rules detect changes in host-based parameters like the execution of software processes, changes in user privileges, and registry modifications, among many others. OSSEC rules will trigger an alert in Sguil. Pivoting to Kibana on the host IP address allows you to choose the type of alert based on the program that created it. Filtering for OSSEC indices results in a view of the OSSEC events that occurred on the host, including indicators that malware may have interacted with the OS kernel.

 

The figure shows how a user can make a remote system call, using an application, to access a remote operating system’s API to access information regarding the computer’s files, processes, network status and configuration, I/O, and devices.


Digital Forensics In Cybersecurity: Facts To Note

 

 

Now that you have investigated and identified valid alerts, what do you do with the evidence? The cybersecurity analyst will inevitably uncover evidence of criminal activity. In order to protect the organization and to prevent cybercrime, it is necessary to identify threat actors, report them to the appropriate authorities, and provide evidence to support prosecution.

 

Tier 1 cybersecurity analysts are often the first to uncover wrongdoing. Cybersecurity analysts must know how to properly handle evidence and attribute it to threat actors. In this article, we will be talking about some of the facts that you need to know about Digital Forensics in cybersecurity.

 

Digital forensics is the recovery and investigation of information found on digital devices as it relates to criminal activity. Indicators of compromise are the evidence that a cybersecurity incident has occurred.

 

This information could be data on storage devices, in volatile computer memory, or the traces of cybercrime that are preserved in network data, such as pcaps and logs. It is essential that all indicators of compromise be preserved for future analysis and attack attribution.

 

Cybercriminal activity can be broadly characterized as originating from inside of or outside of the organization. Private investigations are concerned with individuals inside the organization.

 

 

These individuals could simply be behaving in ways that violate user agreements or other non-criminal conduct. When individuals are suspected of involvement in criminal activity involving the theft or destruction of intellectual property, an organization may choose to involve law enforcement authorities, in which case the investigation becomes public.

 

 

Internal users could also have used the organization’s network to conduct other criminal activities that are unrelated to the organizational mission but are in violation of various legal statutes.

 

In this case, public officials will carry out the investigation.

 

When an external attacker has exploited a network and stolen or altered data, evidence needs to be gathered to document the scope of the exploit. Various regulatory bodies specify a range of actions that an organization must take when various types of data have been compromised. The results of forensic investigation can help to identify the actions that need to be taken.

 

For example, under the US HIPAA regulations, if a data breach has occurred that involves patient information, notification of the breach must be made to the affected individuals. If the breach involves more than 500 individuals in a state or jurisdiction, the media, as well as the affected individuals, must be notified.

 

A digital forensic investigation must be used to determine which individuals were affected, and to certify the number of affected individuals so that appropriate notification can be made in compliance with HIPAA regulations.

 

 

It is possible that the organization itself could be the subject of an investigation. Cybersecurity analysts may find themselves in direct contact with digital forensic evidence that details the conduct of members of the organization.

 

Analysts must know the requirements regarding the preservation and handling of such evidence. Failure to do so could result in criminal penalties for the organization and even the cybersecurity analyst if the intention to destroy evidence is established.

The Digital Forensics Process

It is important that an organization develop well-documented processes and procedures for digital forensic analysis. Regulatory compliance may require this documentation, and this documentation may be inspected by authorities in the event of a public investigation.
NIST Special Publication 800-86, Guide to Integrating Forensic Techniques into Incident Response, is a valuable resource for organizations that require guidance in developing digital forensics plans. For example, it recommends that forensics be performed using a four-phase process.
The following describes the four basic phases of the digital evidence forensic process.
This image depicts the Digital Evidence Forensic Process as a progress bar moving from left to right.
The four steps are Collection, Examination, Analysis, and Reporting. Above the steps are listed the inputs or outputs for each step: media is collected, examination results in data, analysis yields information, and evidence is reported.

The Digital Evidence Forensic Process

Types of Evidence

In legal proceedings, evidence is broadly classified as either direct or indirect. Direct evidence is evidence that was indisputably in the possession of the accused or is eyewitness evidence from someone who directly observed criminal behaviour.
Evidence is further classified as:
  • Best evidence – This is evidence that is in its original state. This evidence could be storage devices used by an accused, or archives of files that can be proven to be unaltered.
  • Corroborating evidence – This is evidence that supports an assertion that is developed from the best evidence.
  • Indirect evidence – This is evidence that, in combination with other facts, establishes a hypothesis. This is also known as circumstantial evidence. For example, evidence that an individual has committed similar crimes can support the assertion that the person committed the crime of which they are accused.

Evidence Collection Order

IETF RFC 3227 provides guidelines for the collection of digital evidence. It describes an order for the collection of digital evidence based on the volatility of the data. Data stored in RAM is the most volatile, and it will be lost when the device is turned off. In addition, important data in volatile memory could be overwritten by routine machine processes.
Therefore, the collection of digital evidence should begin with the most volatile evidence and proceed to the least volatile, as shown in the figure.
This image uses a downward-pointing arrow, graded in color from red to green, to assign a level of volatility to certain evidence sources.
The most volatile source listed is the contents of RAM, the source with mid-level volatility is listed as the contents of fixed disks, and the source that is listed as non-volatile is archived backup data.

Evidence Collection Priority

An example of most volatile to least volatile evidence collection order is as follows:

  1. Memory registers, caches
  2. The routing table, ARP cache, process table, kernel statistics, RAM
  3. Temporary file systems
  4. Non-volatile media, fixed and removable
  5. Remote logging and monitoring data
  6. Physical interconnections and topologies
  7. Archival media, tape or other backups
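The collection order above can be encoded so that acquisition scripts always work most-volatile-first. A minimal Python sketch (the source names simply mirror the list above; nothing here comes from RFC 3227 beyond the ordering itself):

```python
# Evidence sources ranked from most volatile (lowest number) to least
# volatile, following the RFC 3227-based ordering listed above.
VOLATILITY_ORDER = [
    (1, "memory registers and caches"),
    (2, "routing table, ARP cache, process table, kernel statistics, RAM"),
    (3, "temporary file systems"),
    (4, "non-volatile media (fixed and removable)"),
    (5, "remote logging and monitoring data"),
    (6, "physical interconnections and topologies"),
    (7, "archival media (tape or other backups)"),
]

def collection_plan(sources):
    """Return the given evidence sources sorted most-volatile-first."""
    rank = {name: level for level, name in VOLATILITY_ORDER}
    return sorted(sources, key=lambda s: rank[s])

# Even if sources are identified out of order, the plan restores the
# correct collection priority.
plan = collection_plan([
    "temporary file systems",
    "memory registers and caches",
    "archival media (tape or other backups)",
])
print(plan[0])  # memory registers and caches
```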

 

Details of the systems from which the evidence was collected should be recorded, including who has access to those systems and at what level of permission. Such details should include the hardware and software configurations of the systems from which the data was obtained.

Chain of Custody

Although evidence may have been gathered from sources that support attribution to an accused individual, it can be argued that the evidence could have been altered or fabricated after it was collected. In order to counter this argument, a rigorous chain of custody must be defined and followed.
Chain of custody involves the collection, handling, and secure storage of evidence. Detailed records should be kept of the following:
  • Who discovered and collected the evidence.
  • All details about the handling of evidence, including times, places, and personnel involved.
  • Who had primary responsibility for the evidence, when responsibility was assigned, and when custody changed.
  • Who had physical access to the evidence while it was stored. Access should be restricted to only the most essential personnel.
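These record-keeping requirements map naturally onto a structured log. A minimal Python sketch of a chain-of-custody record (the field names and sample data are illustrative, not taken from any standard):

```python
from dataclasses import dataclass, field

@dataclass
class CustodyEvent:
    """One entry in a chain-of-custody log for a piece of evidence."""
    timestamp: str  # when the event occurred
    action: str     # e.g. "collected", "transferred", "accessed"
    person: str     # who performed the action
    location: str   # where the evidence was at the time
    notes: str = ""

@dataclass
class EvidenceItem:
    item_id: str
    description: str
    custody_log: list = field(default_factory=list)

    def record(self, event: CustodyEvent):
        self.custody_log.append(event)

    def current_custodian(self):
        # The most recent entry identifies who holds responsibility now.
        return self.custody_log[-1].person if self.custody_log else None

disk = EvidenceItem("EV-001", "Hard drive from suspect workstation")
disk.record(CustodyEvent("2024-05-01T09:00", "collected", "A. Analyst", "Server room"))
disk.record(CustodyEvent("2024-05-01T11:30", "transferred", "B. Examiner", "Forensics lab"))
print(disk.current_custodian())  # B. Examiner
```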

Data Integrity and Preservation

When collecting data, it is important that it is preserved in its original condition. File timestamps should be preserved. For this reason, the original evidence should be copied, and analysis should only be conducted on copies of the original. This is to avoid accidental loss or alteration of the evidence. Because timestamps may be part of the evidence, opening files from the original media should be avoided.
The process used to create copies of the evidence that is used in the investigation should be recorded. Whenever possible, the copies should be direct bit-level copies of the original storage volumes.

It should be possible to compare the archived disk image with the investigated disk image to identify whether the contents of the investigated disk have been tampered with. For this reason, it is important to archive and protect the original disk to keep it in its original, untampered condition.
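A standard way to demonstrate that the investigated copy still matches the archived original is to compare cryptographic hashes of the two images. A sketch using Python's standard hashlib module (the file paths are illustrative):

```python
import hashlib

def image_digest(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a disk image, reading in chunks so
    that multi-gigabyte images do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(original_path, copy_path):
    """True if the working copy is bit-for-bit identical to the original."""
    return image_digest(original_path) == image_digest(copy_path)
```

In practice, the digest of the original is computed once at acquisition time and recorded with the chain-of-custody documentation; any later mismatch indicates alteration or corruption of the working copy.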

 

Volatile memory could contain forensic evidence, so special tools should be used to preserve that evidence before the device is shut down and evidence is lost. Users should not disconnect, unplug, or turn off infected machines unless explicitly told to do so by security personnel.

 

Following these processes will ensure that any evidence of wrongdoing will be preserved, and any indicators of compromise can be identified.

Attack Attribution

After the extent of the cyberattack has been assessed and the evidence collected and preserved, incident response can move on to identifying the source of the attack. As we know, a wide range of threat actors exists, ranging from disgruntled individuals and hackers to cybercriminals, criminal gangs, and nation-states.
Some criminals act from inside the network, while others can be on the other side of the world. The sophistication of cybercrime varies as well. Nation-states may employ large groups of highly-trained individuals to carry out an attack and hide their tracks, while other threat actors may openly brag about their criminal activities.
Threat attribution refers to the act of determining the individual, organization, or nation responsible for a successful intrusion or attack incident. Identifying responsible threat actors should occur through the principled and systematic investigation of the evidence.
While it may be useful to also speculate as to the identity of threat actors by identifying potential motivations for an incident, it is important not to let this bias the investigation. For example, attributing an attack to a commercial competitor may lead the investigation away from the possibility that a criminal gang or nation-state was responsible.
In an evidence-based investigation, the incident response team correlates Tactics, Techniques, and Procedures (TTP) that were used in the incident with other known exploits. Cybercriminals, much like other criminals, have specific traits that are common to most of their crimes.

Threat intelligence sources can help to map the TTP identified by an investigation to known sources of similar attacks. However, this highlights a problem with threat attribution. Evidence of cybercrime is seldom direct evidence. Identifying commonalities between TTPs for known and unknown threat actors is circumstantial evidence.
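One simple way to quantify that circumstantial overlap is to score the TTPs observed in an incident against TTP sets previously attributed to known actors, for example with Jaccard similarity. A sketch (the actor names and ATT&CK-style technique IDs are illustrative only):

```python
def jaccard(a, b):
    """Similarity of two TTP sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical profiles of known actors, keyed by technique IDs.
known_actors = {
    "ActorA": {"T1566", "T1059", "T1071", "T1041"},
    "ActorB": {"T1190", "T1505", "T1003"},
}

# Techniques identified during the current investigation.
observed = {"T1566", "T1059", "T1041", "T1027"}

scores = {name: jaccard(observed, ttps) for name, ttps in known_actors.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # ActorA 0.6
```

As the text notes, a high score is still only circumstantial evidence; it narrows the investigation rather than proving attribution.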

 

Some aspects of a threat that can aid in attribution are the location of originating hosts or domains, features of the code used in malware, the tools used, and other techniques. Sometimes, at the national security level, threats cannot be openly attributed because doing so would expose methods and capabilities that need to be protected.

 

For internal threats, asset management plays a major role. Uncovering the devices from which an attack was launched can lead directly to the threat actor. IP addresses, MAC addresses, and DHCP logs can help track the addresses used in the attack back to a specific device. AAA logs are very useful in this regard, as they track who accessed what network resources at what time.
 

The MITRE ATT&CK Framework

One way to attribute an attack is to model threat actor behavior. The MITRE Adversarial Tactics, Techniques & Common Knowledge (ATT&CK) Framework enables the detection of attacker tactics, techniques, and procedures (TTP) as part of threat defence and attack attribution.

 

This is done by mapping the steps in an attack to a matrix of generalized tactics and describing the techniques that are used in each tactic. Tactics consist of the technical goals that an attacker must accomplish in order to execute an attack, and techniques are the means by which the tactics are accomplished.

 

Finally, procedures are the specific actions taken by threat actors in the techniques that have been identified. Procedures are the documented real-world use of techniques by threat actors.
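The tactic, technique, and procedure hierarchy can be pictured as a nested mapping. A minimal sketch (the tactic and technique names follow ATT&CK conventions, but the procedure entries are invented for illustration):

```python
# Tactics (attacker goals) map to techniques (how a goal is achieved),
# which map to procedures (documented real-world usage of a technique).
attack_matrix = {
    "Initial Access": {
        "Phishing": [
            "Spearphishing email with a malicious document attachment",
        ],
    },
    "Execution": {
        "Command and Scripting Interpreter": [
            "Script that downloads and runs a second-stage payload",
        ],
    },
    "Exfiltration": {
        "Exfiltration Over C2 Channel": [
            "Staged archives uploaded over the existing HTTPS beacon",
        ],
    },
}

def techniques_for(tactic):
    """List the techniques recorded under a given tactic."""
    return sorted(attack_matrix.get(tactic, {}))

print(techniques_for("Initial Access"))  # ['Phishing']
```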

 

The MITRE ATT&CK Framework is a global knowledge base of threat actor behaviour. It is based on observation and analysis of real-world exploits with the purpose of describing the behaviour of the attacker, not the attack itself. It is designed to enable automated information sharing by defining data structures for the exchange of information between its community of users and MITRE.

 

The figure shows an analysis of ransomware exploits from the excellent ANY.RUN online sandbox. The columns show the enterprise attack matrix tactics, with the techniques that are used by the malware arranged under the columns. Clicking the technique then lists details of the procedures that are used by the specific malware instance with a definition, explanation, and examples of the technique.

 

Note: Do an internet search on MITRE ATT&CK to learn more about the tool.

MITRE ATT&CK Matrix for a Ransomware Exploit

Action Point
PS: If you would like to have an online course on any of the courses that you found on this blog, I will be glad to offer it at both individual and corporate levels. I have trained several individuals and groups, and they are doing well in their various fields of endeavour. Some of those I have trained include staff of Dangote Refinery, FCMB, Zenith Bank, and New Horizons Nigeria, among others. Please come on WhatsApp and let’s talk about your training. You can reach me on WhatsApp HERE. Please note that I will be using Microsoft Teams to facilitate the training.

I know you might agree with some of the points that I have raised in this article. You might not agree with some of the issues raised. Let me know your views about the topic discussed. We will appreciate it if you can drop your comment. Thanks in anticipation.

 

Fact Check Policy

CRMNIGERIA is committed to fact-checking in a fair, transparent and non-partisan manner. Therefore, if you’ve found an error in any of our reports, be it factual, editorial, or an outdated post, please contact us to tell us about it.

 


Cyber Kill Chain In Cybersecurity: Facts To Know

 

The Cyber Kill Chain was developed by Lockheed Martin to identify and prevent cyber intrusions. There are seven steps in the Cyber Kill Chain. Focusing on these steps helps analysts understand the techniques, tools, and procedures of threat actors.

 

When responding to a security incident, the objective is to detect and stop the attack as early as possible in the kill chain progression. The earlier the attack is stopped, the less damage is done and the less the attacker learns about the target network.

 

The Cyber Kill Chain specifies what an attacker must complete to accomplish their goal. The steps in the Cyber Kill Chain are shown in the figure.
If the attacker is stopped at any stage, the chain of attack is broken. Breaking the chain means the defender successfully thwarted the threat actor’s intrusion. Threat actors are successful only if they complete Step 7.
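The seven steps, and the idea that blocking any one of them breaks the chain, can be sketched as a simple ordered list:

```python
KILL_CHAIN = [
    "Reconnaissance",
    "Weaponization",
    "Delivery",
    "Exploitation",
    "Installation",
    "Command and Control",
    "Actions on Objectives",
]

def attack_succeeded(stopped_at=None):
    """An attack succeeds only if no stage was blocked, i.e. the
    threat actor completes all seven steps."""
    return stopped_at is None

def stages_reached(stopped_at=None):
    """Stages the attacker completes before being stopped."""
    if stopped_at is None:
        return list(KILL_CHAIN)
    return KILL_CHAIN[:KILL_CHAIN.index(stopped_at)]

# Stopping the attacker at Delivery means only the first two stages
# completed, limiting both damage and what the attacker learns.
print(stages_reached("Delivery"))  # ['Reconnaissance', 'Weaponization']
```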

Note: Threat actor is the term used throughout this course to refer to the party instigating the attack. However, Lockheed Martin uses the term “adversary” in its description of the Cyber Kill Chain. Therefore, the terms adversary and threat actor are used interchangeably in this topic.

The figure depicts the steps of the Cyber Kill Chain in a numbered vertical list. The steps of the Cyber Kill Chain are explained in detail in the next sections of the text.

 

Reconnaissance

Reconnaissance is when the threat actor performs research, gathers intelligence, and selects targets. This will inform the threat actor if the attack is worth performing. Any public information may help to determine the what, where, and how of the attack to be performed.
There is a lot of publicly available information, especially for larger organizations, including news articles, websites, conference proceedings, and public-facing network devices. An increasing amount of information about employees is also available through social media outlets.
The threat actor will choose targets that have been neglected or unprotected because they have a higher likelihood of being penetrated and compromised. All information obtained by the threat actor is reviewed to determine its importance and whether it reveals possible additional avenues of attack.
The table summarizes some of the tactics and defences used during the reconnaissance step.
Adversary Tactics – Plan and conduct research:
  • Harvest email addresses
  • Identify employees on social media
  • Collect all public relations information (press releases, awards, conference attendees, etc.)
  • Discover internet-facing servers
  • Conduct scans of the network to identify IP addresses and open ports.
SOC Defenses – Discover adversary’s intent:
  • Web log alerts and historical searching data
  • Data mine browser analytics
  • Build playbooks for detecting behaviour that indicates recon activity
  • Prioritize defence around technologies and people that reconnaissance activity is targeting
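A playbook for detecting recon activity can start very simply, for example by flagging source addresses that request an unusually large number of distinct URLs. A deliberately simplified Python sketch (the log format and threshold are illustrative):

```python
from collections import defaultdict

def find_scanners(log_entries, threshold=10):
    """Flag source IPs that requested more than `threshold` distinct
    paths - a crude indicator of web reconnaissance or scanning."""
    paths_by_src = defaultdict(set)
    for src_ip, path in log_entries:
        paths_by_src[src_ip].add(path)
    return {ip for ip, paths in paths_by_src.items() if len(paths) > threshold}

# Simulated web log: one host probing many paths, one browsing normally.
log = [("203.0.113.9", f"/probe/{i}") for i in range(25)]
log += [("198.51.100.7", "/index.html"), ("198.51.100.7", "/about")]

print(find_scanners(log))  # {'203.0.113.9'}
```

Real playbooks would add time windows, port-scan detection, and correlation with threat intelligence, but the principle is the same: establish what normal request behaviour looks like, then alert on deviations.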

Weaponization

The goal of this step is to use the information from reconnaissance to develop a weapon against specific targeted systems or individuals in the organization. To develop this weapon, the designer will use the vulnerabilities of the assets that were discovered and build them into a tool that can be deployed.
After the tool has been used, it is expected that the threat actor has achieved their goal of gaining access to the target system or network, or of degrading the health of a target or the entire network. The threat actor will further examine network and asset security to expose additional weaknesses, gain control over other assets, or deploy additional attacks.
It is not difficult to choose a weapon for the attack. The threat actor needs to look at what attacks are available for the vulnerabilities they have discovered. Many attacks have already been created and widely tested.
One problem is that because these attacks are so well known, they are most likely also known by the defenders. It is often more effective to use a zero-day attack to avoid detection methods. A zero-day attack uses a weapon that is unknown to defenders and network security systems.
The threat actor may wish to develop their own weapon that is specifically designed to avoid detection, using the information about the network and systems that they have learned. Attackers have learned how to create numerous variants of their attacks in order to evade network defences.
The table summarizes some of the tactics and defences used during the weaponization step.
Adversary Tactics – Prepare and stage the operation:
  • Obtain an automated tool to deliver the malware payload (weaponizer).
  • Select or create a document to present to the victim.
  • Select or create a backdoor and command and control infrastructure.
SOC Defenses – Detect and collect weaponization artefacts:
  • Ensure that IDS rules and signatures are up to date.
  • Conduct full malware analysis.
  • Build detections for the behaviour of known weaponizers.
  • Is malware old, “off the shelf” or new malware that might indicate a tailored attack?
  • Collect files and metadata for future analysis.
  • Determine which weaponizer artefacts are common to which campaigns.

Delivery

During this step, the weapon is transmitted to the target using a delivery vector. This may be through the use of a website, removable USB media, or an email attachment. If the weapon is not delivered, the attack will be unsuccessful.
The threat actor will use many different methods to increase the odds of delivering the payload such as encrypting communications, making the code look legitimate, or obfuscating the code.
Security sensors have become advanced enough to detect the code as malicious unless it is altered to avoid detection. The code may be altered to seem innocent yet still perform the necessary actions, even though it may take longer to execute.
The table summarizes some of the tactics and defences used during the delivery step.
Adversary Tactics – Launch malware at target:
  • Direct against web servers
  • Indirect delivery through:
    • Malicious email
    • Malware on a USB stick
    • Social media interactions
    • Compromised websites
SOC Defenses – Block delivery of malware:
  • Analyze the infrastructure path used for delivery.
  • Understand targeted servers, people, and data available to attack.
  • Infer intent of the adversary based on targeting.
  • Collect email and web logs for forensic reconstruction.

Exploitation

After the weapon has been delivered, the threat actor uses it to exploit the vulnerability and gain control of the target. The most common exploit targets are applications, operating system vulnerabilities, and users. The attacker must use an exploit that gains the effect they desire.
This is very important because if the wrong exploit is used, not only will the attack fail, but unintended side effects such as a DoS condition or multiple system reboots will attract undue attention that could easily inform cybersecurity analysts of the attack and of the threat actor’s intentions.
The table summarizes some of the tactics and defences used during the exploitation step.
Adversary Tactics – Exploit a vulnerability to gain access:
  • Use software, hardware, or human vulnerability
  • Acquire or develop the exploit
  • Use an adversary-triggered exploit for server vulnerabilities
  • Use a victim-triggered exploit such as opening an email attachment or malicious weblink
SOC Defenses – Train employees, secure code, and harden devices:
  • Employee security awareness training and periodic email testing
  • Web developer training for securing code
  • Regular vulnerability scanning and penetration testing
  • Endpoint hardening measures
  • Endpoint auditing to forensically determine the origin of exploit

Installation

This step is where the threat actor establishes a back door into the system to allow for continued access to the target. To preserve this backdoor, it is important that remote access does not alert cybersecurity analysts or users.
The access method must survive through antimalware scans and rebooting of the computer to be effective. This persistent access can also allow for automated communications, especially effective when multiple channels of communication are necessary when commanding a botnet.
The table summarizes some of the tactics and defences used during the installation step.
Adversary Tactics – Install persistent backdoor:
  • Install webshell on a web server for persistent access.
  • Create a point of persistence by adding services, AutoRun keys, etc.
  • Some adversaries modify the timestamp of the malware to make it appear as part of the operating system.
SOC Defenses – Detect, log, and analyze installation activity:
  • HIPS to alert or block common installation paths.
  • Determine if malware requires elevated privileges or user privileges
  • Endpoint auditing to discover abnormal file creations.
  • Determine if malware is a known threat or a new variant.

Command and Control

In this step, the goal is to establish command and control (CnC or C2) with the target system. Compromised hosts usually beacon out of the network to a controller on the internet. This is because most malware requires manual interaction in order to exfiltrate data from the network.
CnC channels are used by the threat actor to issue commands to the software that they installed on the target.
The cybersecurity analyst must be able to detect CnC communications in order to discover the compromised host. This may be in the form of unauthorized Internet Relay Chat (IRC) traffic or excessive traffic to suspect domains.
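Beaconing is often visible as near-regular connection intervals to the same destination. A simplified sketch that flags a series of connection timestamps whose spacing is suspiciously uniform (the jitter threshold and minimum event count are illustrative):

```python
from statistics import pstdev

def looks_like_beacon(timestamps, max_jitter=2.0, min_events=5):
    """Flag a connection series whose inter-arrival times are almost
    constant - a crude indicator of automated CnC check-ins."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter

# A host checking in roughly every 60 seconds vs. irregular human browsing.
beacon = [0, 60, 121, 180, 241, 300]
human = [0, 5, 90, 92, 400, 410]

print(looks_like_beacon(beacon), looks_like_beacon(human))  # True False
```

Production detections are more sophisticated (they account for sleep-with-jitter malware, DNS tunnelling, and domain reputation), but low-variance check-in timing remains a common starting signal.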
The table summarizes some of the tactics and defences used during the command and control step.
Adversary Tactics – Open channel for target manipulation:
  • Open two-way communications channel to CNC infrastructure
  • Most common CNC channels over the web, DNS, and email protocols
  • CnC infrastructure may be adversary owned or another victim network itself
SOC Defenses – Last chance to block the operation:
  • Research possible new CnC infrastructures
  • Discover CnC infrastructure through malware analysis
  • Isolate DNS traffic to suspect DNS servers, especially Dynamic DNS
  • Prevent impact by blocking or disabling the CnC channel
  • Consolidate the number of internet points of presence
  • Customize rules blocking of CnC protocols on web proxies

Actions on Objectives

The final step of the Cyber Kill Chain describes the threat actor achieving their original objective. This may be data theft, performing a DDoS attack, or using the compromised network to create and send spam or mine Bitcoin. At this point, the threat actor is deeply rooted in the systems of the organization, hiding their moves and covering their tracks. It is extremely difficult to remove the threat actor from the network.
The table summarizes some of the tactics and defences used during the actions on the objectives step.
Adversary Tactics – Reap the rewards of a successful attack:
  • Collect user credentials
  • Privilege escalation
  • Internal reconnaissance
  • Lateral movement through an environment
  • Collect and exfiltrate data
  • Destroy systems
  • Overwrite, modify, or corrupt data
SOC Defenses – Detect by using forensic evidence:
  • Establish incident response playbook
  • Detect data exfiltration, lateral movement, and unauthorized credential usage
  • Immediate analyst response for all alerts
  • Forensic analysis of endpoints for rapid triage
  • Network packet captures to recreate the activity
  • Conduct damage assessment


Understanding Diamond Model Of Intrusion Analysis

 

The Diamond Model of Intrusion Analysis is made up of four parts, as shown in the figure. The model represents a security incident or event. In the Diamond Model, an event is a time-bound activity that is restricted to a specific step in which an adversary uses a capability over infrastructure to attack a victim to achieve a specific result.
The four core features of an intrusion event are adversary, capability, infrastructure, and victim:
 
  • Adversary – These are the parties responsible for the intrusion.
  • Capability – This is a tool or technique that the adversary uses to attack the victim.
  • Infrastructure – This is the network path or paths that the adversaries use to establish and maintain command and control over their capabilities.
  • Victim – This is the target of the attack. However, a victim might be the target initially and then used as part of the infrastructure to launch other attacks.

 
The adversary uses capabilities over infrastructure to attack the victim. The model can be interpreted as saying, “The adversary uses the infrastructure to connect to the victim. The adversary develops a capability to exploit the victim.” For example, a capability like malware might be used over the email infrastructure by an adversary to exploit a victim.

 

Meta-features expand the model slightly to include the following important elements:

  • Timestamp – This indicates the start and stop time of an event and is an integral part of grouping malicious activity.
  • Phase – This is analogous to steps in the Cyber Kill Chain; malicious activity includes two or more steps executed in succession to achieve the desired result.
  • Result – This delineates what the adversary gained from the event. Results can be documented as one or more of the following: confidentiality compromised, integrity compromised, and availability compromised.
  • Direction – This indicates the direction of the event across the Diamond Model. These include Adversary-to-Infrastructure, Infrastructure-to-Victim, Victim-to-Infrastructure, and Infrastructure-to-Adversary.
  • Methodology – This is used to classify the general type of event, such as port scan, phishing, content delivery attack, syn flood, etc.
  • Resources – These are one or more external resources used by the adversary for the intrusion event, such as software, adversary’s knowledge, information (e.g., username/passwords), and assets to carry out the attack (hardware, funds, facilities, network access).
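The four core features and the meta-features above can be captured as a single record per event. A minimal Python sketch (the field values are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class DiamondEvent:
    # Core features
    adversary: str
    capability: str
    infrastructure: str
    victim: str
    # Meta-features
    timestamp: str
    phase: str        # maps to a Cyber Kill Chain step
    result: str       # e.g. "integrity compromised"
    direction: str    # e.g. "Infrastructure-to-Victim"
    methodology: str  # e.g. "phishing", "port scan"
    resources: str

event = DiamondEvent(
    adversary="unknown",
    capability="trojan horse email attachment",
    infrastructure="compromised mail relay",
    victim="network administrator workstation",
    timestamp="2024-05-01T10:15",
    phase="Delivery",
    result="integrity compromised",
    direction="Infrastructure-to-Victim",
    methodology="phishing",
    resources="email infrastructure",
)
print(event.phase)  # Delivery
```

Structuring each intrusion event this way makes it straightforward to group events into activity threads and to pivot from one corner of the diamond to another, as described next.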

 

The figure depicts the Diamond Model as a line drawn diamond. The core features of an intrusion event are located at each of the corners of the diamond. An adversary is placed on the top, infrastructure is on the left, the victim is on the bottom, and capability is on the right. There are arrows pointing away from the word adversary at the top to the words infrastructure and capability on the sides, and then arrows pointing from infrastructure and capability to the word victim on the bottom.
The arrows are used to describe the interaction between the core features. The adversary uses the infrastructure to connect to the victim, and the adversary develops a capability to exploit the victim. Within the diamond is an arrow connecting the adversary and victim and an arrow connecting infrastructure and capability. In the top left of the image is a text list of the Meta-Features; Timestamp, Phase, Result, Direction, Methodology, and Resources.

The Diamond Model

Pivoting Across the Diamond Model

As a cybersecurity analyst, you may be called on to use the Diamond Model of Intrusion Analysis to diagram a series of intrusion events. The Diamond Model is ideal for illustrating how the adversary pivots from one event to the next.
For example, in the figure, an employee reports that his computer is acting abnormally. A host scan by the security technician indicates that the computer is infected with malware. An analysis of the malware reveals that the malware contains a list of CnC domain names. These domain names resolve to a list of IP addresses. These IP addresses are then used to identify the adversary, as well as investigate logs to determine if other victims in the organization are using the CnC channel.
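That pivot can be scripted. Starting from the CnC domains extracted from the malware and the IP addresses they resolved to during the investigation (held here as a pre-collected mapping rather than live DNS lookups), the firewall logs are searched for other internal hosts contacting those addresses. A sketch with illustrative data:

```python
# Domains recovered from malware analysis, with the IP addresses they
# resolved to at investigation time (collected beforehand, not live DNS).
cnc_domains = {
    "evil-updates.example": "203.0.113.50",
    "cdn-sync.example": "203.0.113.51",
}

# Simplified firewall log: (source host, destination IP) pairs.
firewall_log = [
    ("10.0.0.11", "198.51.100.20"),  # normal traffic
    ("10.0.0.11", "203.0.113.50"),   # known infected host
    ("10.0.0.42", "203.0.113.51"),   # previously unknown victim
    ("10.0.0.99", "198.51.100.30"),
]

cnc_ips = set(cnc_domains.values())
additional_victims = {src for src, dst in firewall_log if dst in cnc_ips}

print(sorted(additional_victims))  # ['10.0.0.11', '10.0.0.42']
```

The same set of CnC IP addresses can then be used for the adversary corner of the diamond, for example by checking address ownership records.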
The figure depicts the Diamond Model’s characterization of an exploit. The diamond with the core features is shown, and there are numbered steps with arrows connecting the various core features. Step 1 connects the victim to the capability and has the note Victim discovers malware. Step 2 connects the capability and infrastructure and has the note Malware contains CnC domain. Step 3 has an arrow arched out from infrastructure to the note CnC domain resolves to CnC IP address. Step 4 connects infrastructure to a victim with the note Firewall logs reveal further victims contacting CnC IP address. Step 5 connects infrastructure to an adversary with the note IP address ownership details reveal adversary.

Diamond Model Characterization of an Exploit

The Diamond Model and the Cyber Kill Chain

Adversaries do not operate in just a single event. Instead, events are threaded together in a chain in which each event must be successfully completed before the next event. This thread of events can be mapped to the Cyber Kill Chain previously discussed in the chapter.
The following example, shown in the figure, illustrates the end-to-end process of an adversary as they vertically traverse the Cyber Kill Chain, use a compromised host to horizontally pivot to another victim, and then begin another activity thread:
1. Adversary conducts a web search for victim company Gadgets, Inc., receiving as part of the results the domain name gadgets.com.
2. Adversary uses the newly discovered domain gadgets.com for a new search “network administrator gadgets.com” and discovers forum postings from users claiming to be network administrators of gadgets.com. The user profiles reveal their email addresses.
3. Adversary sends phishing emails with a Trojan horse attached to the network administrators of gadgets.com.
4. One network administrator (NA1) of gadgets.com opens the malicious attachment. This executes the enclosed exploit, allowing for further code execution.
5. NA1’s compromised host sends an HTTP Post message to an IP address, registering it with a CnC controller. NA1’s compromised host receives an HTTP Response in return.
6. It is revealed from reverse engineering that the malware has additional IP addresses configured which act as a back-up if the first controller does not respond.
7. Through a CnC HTTP response message sent to NA1’s host, the malware begins to act as a web proxy for new TCP connections.
8. Through information from the proxy that is running on NA1’s host, Adversary does a web search for “most important research ever” and finds Victim 2, Interesting Research Inc.
9. Adversary checks NA1’s email contact list for any contacts from Interesting Research Inc. and discovers the contact for the Interesting Research Inc. Chief Research Officer.
10. The Chief Research Officer of Interesting Research Inc. receives a spear-phishing email from Gadgets, Inc.’s NA1’s email address, sent from NA1’s host, with the same payload as observed in Event 3.
The adversary now has two compromised victims from which additional attacks can be launched. For example, the adversary could mine the Chief Research Officer’s email contacts for the additional potential victims. The adversary might also set up another proxy to exfiltrate all of the Chief Research Officer’s files.
Note: This example is a modification of the U.S. Department of Defense’s example in the publication “The Diamond Model of Intrusion Analysis”.
Action Point
PS: If you would like to have an online course on any of the courses that you found on this blog, I will be glad to do that on an individual and corporate level, I will be very glad to do that I have trained several individuals and groups and they are doing well in their various fields of endeavour. Some of those that I have trained includes staffs of Dangote Refinery, FCMB, Zenith Bank, New Horizons Nigeria among others. Please come on Whatsapp and let’s talk about your training. You can reach me on Whatsapp HERE. Please note that I will be using Microsoft Team to facilitate the training. 

I know you might agree with some of the points that I have raised in this article. You might not agree with some of the issues raised. Let me know your views about the topic discussed. We will appreciate it if you can drop your comment. Thanks in anticipation.

 

Fact Check Policy

CRMNAIJA is committed to fact-checking in a fair, transparent and non-partisan manner. Therefore, if you’ve found an error in any of our reports, be it factual, editorial, or an outdated post, please contact us to tell us about it.


How To Establish Incident Response Capability

 

Incident Response involves the methods, policies, and procedures that are used by an organization to respond to a cyber attack. The aims of incident response are to limit the impact of the attack, assess the damage caused, and implement recovery procedures.
Because of the potentially large-scale loss of property and revenue that cyber attacks can cause, it is essential that organizations create and maintain detailed incident response plans and designate personnel responsible for executing all aspects of those plans. In this article, I discuss some of the ways to establish an incident response capability in cybersecurity.

 

The U.S. National Institute of Standards and Technology (NIST) recommendations for incident response are detailed in Special Publication 800-61, revision 2, entitled “Computer Security Incident Handling Guide.”

 

Note: Although this chapter summarizes much of the content in the NIST 800-61r2 standard, you should be familiar with the entire publication as it covers four major exam topics for the Understanding Cisco Cybersecurity Operations Fundamentals exam.
The NIST 800-61r2 standard provides guidelines for incident handling, particularly for analyzing incident-related data, and determining the appropriate response to each incident. The guidelines can be followed independently of particular hardware platforms, operating systems, protocols, or applications.
The first step for an organization is to establish a computer security incident response capability (CSIRC). NIST recommends creating policies, plans, and procedures for establishing and maintaining a CSIRC.
Policy Elements
An incident response policy details how incidents should be handled based on the organization’s mission, size, and function. The policy should be reviewed regularly to adjust it to meet the goals of the roadmap that has been laid out. Policy elements include the following:
  • Statement of management commitment
  • Purpose and objectives of the policy
  • Scope of the policy
  • Definition of computer security incidents and related terms
  • Organizational structure and definition of roles, responsibilities, and levels of authority
  • Prioritization of severity ratings of incidents
  • Performance measures
  • Reporting and contact forms

Plan Elements
A good incident response plan helps to minimize damage caused by an incident. It also helps to make the overall incident response program better by adjusting it according to lessons learned. It will ensure that each party involved in the incident response has a clear understanding of not only what they will be doing, but what others will be doing as well. Plan elements are as follows:

  • Mission
  • Strategies and goals
  • Senior management approval
  • An organizational approach to incident response
  • How the incident response team will communicate with the rest of the organization and with other organizations
  • Metrics for measuring the incident response capacity
  • How the program fits into the overall organization

 

Procedure Elements

The procedures that are followed during an incident response should follow the incident response plan. Procedure elements are as follows:

  • Technical processes
  • Using techniques
  • Filling out forms
  • Following checklists

These are typical standard operating procedures (SOPs). SOPs should be detailed enough that the organization’s mission and goals are kept in mind when the procedures are followed. They minimize errors that may be caused by personnel who are under stress while participating in incident handling. It is important to share and practice these procedures, making sure that they are useful, accurate, and appropriate.

Incident Response Stakeholders

Other groups and individuals within the organization may also be involved with incident handling. It is important to ensure that they will cooperate before an incident is underway. Their expertise and abilities can help the Computer Security Incident Response Team (CSIRT) to handle the incident quickly and correctly. These are some of the stakeholders that may be involved in handling a security incident:

 

  • Management – Managers create the policies that everyone must follow. They also design the budget and are in charge of staffing all of the departments. Management must coordinate the incident response with other stakeholders and minimize the damage of an incident.
  • Information Assurance – This group may need to be called in to change things such as firewall rules during some stages of incident management such as containment or recovery.
  • IT Support – This is the group that works with the technology in the organization and understands it the most. Because IT support has a deeper understanding, it is more likely that they will perform the correct action to minimize the effectiveness of the attack or preserve evidence properly.
  • Legal Department – It is a best practice to have the legal department review the incident policies, plans, and procedures to make sure that they do not violate any local or federal guidelines. Also, if any incident has legal implications, a legal expert will need to become involved. This might include prosecution, evidence collection, or lawsuits.
  • Public Affairs and Media Relations – There are times when the media and the public might need to be informed of an incident, such as when their personal information has been compromised during an incident.
  • Human Resources – The human resources department might need to perform disciplinary measures if an incident caused by an employee occurs.
  • Business Continuity Planners – Security incidents may alter an organization’s business continuity. It is important that those in charge of business continuity planning are aware of security incidents and the impact they have had on the organization as a whole. This will allow them to make any changes in plans and risk assessments.
  • Physical Security and Facilities Management – When a security incident happens because of a physical attack, such as tailgating or shoulder surfing, these teams might need to be informed and involved. It is also their responsibility to secure facilities that contain evidence from an investigation.

 

The Cybersecurity Maturity Model Certification
The Cybersecurity Maturity Model Certification (CMMC) framework was created to assess the ability of organizations that perform functions for the U.S. Department of Defense (DoD) to protect the military supply chain from disruptions or losses due to cybersecurity incidents. Security breaches related to DoD information indicated that NIST standards were not sufficient to mitigate the increasing and evolving threat landscape, especially from nation-state threat actors. In order for companies to receive contracts from the DoD, those companies must be certified. The certification consists of five levels, with different levels required depending on the degree of security required by the project.

The CMMC specifies 17 domains, each of which has a varying number of capabilities that are associated with it. The organization is rated by the maturity level that has been achieved for each of the domains. One of the domains concerns incident response. The capabilities that are associated with the incident response domain are as follows:

  • Plan incident response
  • Detect and report events
  • Develop and implement a response to a declared incident
  • Perform post-incident reviews
  • Test incident response

The CMMC certifies organizations by level. For most domains there are five levels; however, for incident response there are only four. The higher the certified level, the more mature the cybersecurity capability of the organization. A summary of the incident response domain maturity levels is shown below.

  • Level 2 – Establish an incident response plan that follows the NIST process. Detect, report, and prioritize events. Respond to events by following predefined procedures. Analyze the cause of incidents in order to mitigate future issues.
  • Level 3 – Document and report incidents to stakeholders that have been identified in the incident response plan. Test the incident response capability of the organization.
  • Level 4 – Use knowledge of attacker tactics, techniques, and procedures (TTP) to refine incident response planning and execution. Establish a security operations center (SOC) that facilitates a 24/7 response capability.
  • Level 5 – Utilize accepted and systematic computer forensic data gathering techniques including the secure handling and storage of forensic data. Develop and utilize manual and automated real-time responses to potential incidents that follow known patterns.

NIST Incident Response Life Cycle

NIST defines four steps in the incident response process life cycle, as shown in the figure.

  • Preparation – The members of the CSIRT are trained in how to respond to an incident. CSIRT members should continually develop knowledge of emerging threats.
  • Detection and Analysis – Through continuous monitoring, the CSIRT quickly identifies, analyzes, and validates an incident.
  • Containment, Eradication, and Recovery – The CSIRT implements procedures to contain the threat, eradicate the impact on organizational assets, and use backups to restore data and software. This phase may cycle back to detection and analysis to gather more information, or to expand the scope of the investigation.
  • Post-Incident Activities – The CSIRT then documents how the incident was handled, recommends changes for future response, and specifies how to avoid a reoccurrence.

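The four phases and the feedback paths between them can be sketched as a small state model. This is an illustrative sketch only; the phase names and transition table are my own encoding of the cycle described above, not part of the NIST standard:

```python
from enum import Enum, auto

class Phase(Enum):
    PREPARATION = auto()
    DETECTION_ANALYSIS = auto()
    CONTAINMENT_ERADICATION_RECOVERY = auto()
    POST_INCIDENT = auto()

# Allowed transitions, including the feedback path from containment
# back to detection and analysis mentioned above.
TRANSITIONS = {
    Phase.PREPARATION: {Phase.DETECTION_ANALYSIS},
    Phase.DETECTION_ANALYSIS: {Phase.CONTAINMENT_ERADICATION_RECOVERY},
    Phase.CONTAINMENT_ERADICATION_RECOVERY: {
        Phase.DETECTION_ANALYSIS,  # gather more information / expand scope
        Phase.POST_INCIDENT,
    },
    Phase.POST_INCIDENT: {Phase.PREPARATION},  # lessons learned feed preparation
}

def can_transition(current: Phase, target: Phase) -> bool:
    """Return True if the life cycle allows moving from current to target."""
    return target in TRANSITIONS[current]
```

The key detail the model captures is that containment may legitimately loop back to detection and analysis, while post-incident lessons feed the next preparation cycle.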
The incident response life cycle is meant to be a self-reinforcing learning process whereby each incident informs the process for handling future incidents. Each of these phases is discussed in more detail in this topic.

The image depicts the NIST incident response cycle, with arrows showing the normal workflow and feedback in an incident response.

Incident Response Life Cycle

Preparation

The preparation phase is when the CSIRT is created and trained. This phase is also when the tools and assets that will be needed by the team to investigate incidents are acquired and deployed. The following list has examples of actions that also take place during the preparation phase:

  • Organizational processes are created to address communication between people on the response team. This includes such things as contact information for stakeholders, other CSIRTs, and law enforcement, an issue tracking system, smartphones, encryption software, etc.
  • Facilities to host the response team and the SOC are created.
  • Necessary hardware and software for incident analysis and mitigation is acquired. This may include forensic software, spare computers, servers and network devices, backup devices, packet sniffers, and protocol analyzers.
  • Risk assessments are used to implement controls that will limit the number of incidents.
  • Validation of security hardware and software deployment is performed on end-user devices, servers, and network devices.
  • User security awareness training materials are developed.

Additional incident analysis resources might be required. Examples of these resources are a list of critical assets, network diagrams, port lists, hashes of critical files, and baseline readings of system and network activity. Mitigation software is also an important item when preparing to handle a security incident. An image of a clean OS and application installation files may be needed to recover a computer from an incident.
Often, the CSIRT may have a jump kit prepared. This is a portable box with many of the items listed above to help in establishing a swift response. Some of these items may be a laptop with appropriate software installed, backup media, and any other hardware, software, or information to help in the investigation. It is important to inspect the jump kit on a regular basis to install updates and make sure that all the necessary elements are available and ready for use. It is helpful to practice deploying the jump kit with the CSIRT to ensure that the team members know how to use its contents properly.

The same boxes as the previous section are shown with the preparation box highlighted.

Preparation Phase

Detection and Analysis

The same boxes as the previous section are shown with the detection and analysis box highlighted.

Detection & Analysis Phase

Because there are so many different ways in which a security incident can occur, it is impossible to create instructions that completely cover each step to follow to handle them. Different types of incidents will require different responses.
The detection and analysis phase covers the following areas:

  • Attack vectors
  • Detection
  • Analysis
  • Scoping
  • Incident notification

An organization should be prepared to handle any incident but should focus on the most common types of incidents so that they can be dealt with swiftly. These are some of the more common types of attack vectors:

  • Web – Any attack that is initiated from a website or application hosted by a website.
  • Email – Any attack that is initiated from an email or email attachment.
  • Loss or Theft – Any equipment that is used by the organization such as a laptop, desktop, or smartphone can provide the required information for someone to initiate an attack.
  • Impersonation – Any attack in which something or someone benign is replaced with something malicious.
  • Attrition – Any attack that uses brute force to attack devices, networks, or services.
  • Media – Any attack that is initiated from external storage or removable media.

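As a rough illustration of how these vector categories might be used in practice, the sketch below tags an incident summary with one of them using simple keyword matching. The keyword lists and function name are hypothetical; a real CSIRT would classify incidents manually or with far richer rules:

```python
# Hypothetical helper: map a free-text incident summary to one of the
# attack vector categories listed above via simple keyword rules.
ATTACK_VECTOR_KEYWORDS = {
    "web": ["website", "web application", "drive-by", "sql injection"],
    "email": ["phishing", "spear-phish", "attachment"],
    "loss or theft": ["stolen laptop", "lost phone", "missing device"],
    "impersonation": ["spoofed", "rogue access point", "man-in-the-middle"],
    "attrition": ["brute force", "ddos", "password spraying"],
    "media": ["usb", "removable media", "external drive"],
}

def classify_vector(summary: str) -> str:
    """Return the first matching attack vector, or 'other' if none match."""
    text = summary.lower()
    for vector, keywords in ATTACK_VECTOR_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return vector
    return "other"
```

Tagging each incident with its vector feeds directly into the per-category incident counts discussed later under data collection.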
Containment, Eradication, and Recovery

The same boxes as the previous section are shown with the containment, eradication, and recovery box highlighted.

Containment, Eradication, and Recovery Phase

After a security incident has been detected and sufficient analysis has been performed to determine that the incident is valid, it must be contained so that a course of action can be determined. Strategies and procedures for incident containment need to be in place before an incident occurs, and they must be implemented before there is widespread damage.

For every type of incident, a containment strategy should be created and enforced. These are some conditions to determine the type of strategy to create for each incident type:

  • How long will it take to implement and complete a solution?
  • How much time and how many resources will be needed to implement the strategy?
  • What is the process to preserve evidence?
  • Can an attacker be redirected to a sandbox so that the CSIRT can safely document the attacker’s methodology?
  • What will be the impact to the availability of services?
  • What is the extent of damage to resources or assets?
  • How effective is the strategy?

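One way to apply these conditions is to rate each candidate containment strategy against them and compare weighted scores. The criteria keys and weights below are assumptions for illustration, not values from NIST:

```python
# Illustrative only: weights encoding the containment criteria above.
# Negative weights penalize costly, slow strategies; positive weights
# reward evidence preservation, availability, and effectiveness.
CRITERIA_WEIGHTS = {
    "time_to_implement": -2,
    "resource_cost": -1,
    "evidence_preservation": 3,
    "service_availability": 2,
    "effectiveness": 4,
}

def score_strategy(ratings: dict) -> int:
    """Score a containment strategy from per-criterion ratings (0-5,
    where a higher rating means more of that quality)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)
```

For example, full network isolation might rate highly on effectiveness and evidence preservation but poorly on service availability; a weighted score makes that trade-off explicit when comparing strategies.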
During containment, additional damage may be incurred. For example, it is not always advisable to unplug the compromised host from the network. The malicious process could detect the disconnection from the CnC controller and trigger a data wipe or encryption on the target. This is where experience and expertise can help to contain an incident beyond the scope of the containment strategy.

Post-Incident Activities

The same boxes as the previous section are shown with the post-incident activity box highlighted.

Post-Incident Activity Phase

After incident response activities have eradicated the threats and the organization has begun to recover from the effects of the attack, it is important to take a step back and periodically meet with all of the parties involved to discuss the events that took place and the actions of all of the individuals while handling the incident. This will provide a platform to learn what was done right, what was done wrong, what could be changed, and what should be improved upon.
Lessons-based hardening
After a major incident has been handled, the organization should hold a “lessons learned” meeting to review the effectiveness of the incident handling process and identify necessary hardening needed for existing security controls and practices. Examples of good questions to answer during the meeting include the following:
  • Exactly what happened, and when?
  • How well did the staff and management perform while dealing with the incident?
  • Were the documented procedures followed? Were they adequate?
  • What information was needed sooner?
  • Were any steps or actions taken that might have inhibited the recovery?
  • What would the staff and management do differently the next time a similar incident occurs?
  • How could information sharing with other organizations be improved?
  • What corrective actions can prevent similar incidents in the future?
  • What precursors or indicators should be watched for in the future to detect similar incidents?
  • What additional tools or resources are needed to detect, analyze, and mitigate future incidents?

Incident Data Collection and Retention

The data collected through “lessons learned” meetings can be used to determine the cost of an incident for budgeting purposes, to determine the effectiveness of the CSIRT, and to identify possible security weaknesses throughout the system. The collected data needs to be actionable. Only collect data that can be used to define and refine the incident handling process.
A higher number of incidents handled can show that something in the incident response methodology is not working properly and needs to be refined. It could also show incompetence in the CSIRT. A lower number of incidents might show that network and host security has been improved, or it could show a lack of incident detection. Separate incident counts for each type of incident may be more effective at showing the strengths and weaknesses of the CSIRT and the implemented security measures. These subcategories can help to target where a weakness resides, rather than whether there is a weakness at all.
The time of each incident provides insight into the total amount of labor used and the total time of each phase of the incident response process. The time until the first response is also important, as well as how long it took to report the incident and escalate it beyond the organization, if necessary.
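These timing metrics are straightforward to compute once incident timestamps are recorded. Below is a minimal sketch; the key names ('detected', 'first_response', 'contained', 'resolved') are this example’s own convention, not a standard schema:

```python
from datetime import datetime

def response_metrics(timestamps: dict) -> dict:
    """Compute the timing metrics described above from ISO-8601
    incident timestamps keyed by 'detected', 'first_response',
    'contained', and 'resolved'."""
    t = {k: datetime.fromisoformat(v) for k, v in timestamps.items()}
    return {
        "time_to_first_response": t["first_response"] - t["detected"],
        "time_to_containment": t["contained"] - t["detected"],
        "total_handling_time": t["resolved"] - t["detected"],
    }
```

Tracking these durations per incident, and per phase, makes it possible to see where labor is concentrated and whether escalation happened promptly.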
It is important to perform an objective assessment of each incident. The response to an incident that has been resolved can be analyzed to determine how effective it was. NIST Special Publication 800-61 provides the following examples of activities that are performed during an objective assessment of an incident:
  • Reviewing logs, forms, reports, and other incident documentation for adherence to established incident response policies and procedures.
  • Identifying which precursors and indicators of the incident were recorded to determine how effectively the incident was logged and identified.
  • Determining if the incident caused damage before it was detected.
  • Determining if the actual cause of the incident was identified, and identifying the vector of attack, the vulnerabilities exploited, and the characteristics of the targeted or victimized systems, networks, and applications.
  • Determining if the incident is a recurrence of a previous incident.
  • Calculating the estimated monetary damage from the incident (e.g., information and critical business processes negatively affected by the incident).
  • Measuring the difference between the initial impact assessment and the final impact assessment.
  • Identifying which measures, if any, could have prevented the incident.

Subjective assessment of each incident requires that incident response team members assess their own performance, as well as that of other team members and of the entire team. Another valuable source of input is the owner of a resource that was attacked, in order to determine if the owner thinks the incident was handled efficiently and if the outcome was satisfactory.

There should be a policy in place in each organization that outlines how long evidence of an incident is retained. Evidence is often retained for many months or many years after an incident has taken place. In some cases, compliance regulations may mandate the retention period. These are some of the determining factors for evidence retention:

  • Prosecution – When an attacker will be prosecuted because of a security incident, the evidence should be retained until after all legal actions have been completed. This may be several months or many years. In legal actions, no evidence should be overlooked or considered insignificant. An organization’s policy may state that any evidence surrounding an incident that has been involved with legal actions must never be deleted or destroyed.
  • Data Type – An organization may specify that specific types of data should be kept for a specific period of time. Items such as email or text may only need to be kept for 90 days. More important data such as that used in an incident response (that has not had legal action), may need to be kept for three years or more.
  • Cost – If there is a lot of hardware and storage media that needs to be stored for a long time, it can become costly. Remember also that as technology changes, functional devices that can use outdated hardware and storage media must be stored as well.

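A retention policy like the one described can be encoded as a simple lookup, with legal hold overriding everything else. The data types and periods below are illustrative, taken from the examples above rather than from any regulation:

```python
# Illustrative retention periods in days, based on the examples above.
RETENTION_DAYS = {
    "email": 90,                   # routine messages and texts
    "incident_no_legal": 3 * 365,  # incident data without legal action
}

def retention_days(data_type: str, legal_hold: bool = False):
    """Return the retention period in days, or None for indefinite
    retention (evidence involved in legal action is never deleted)."""
    if legal_hold:
        return None
    return RETENTION_DAYS.get(data_type, 3 * 365)
```

The legal-hold branch reflects the prosecution factor: evidence tied to legal action is retained until all proceedings are complete, which in practice means an indefinite hold.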
Reporting Requirements and Information Sharing

Governmental regulations should be consulted by the legal team to determine precisely the organization’s responsibility for reporting the incident. In addition, management will need to determine what additional communication is necessary with other stakeholders, such as customers, vendors, partners, etc.
Beyond the legal requirements and stakeholder considerations, NIST recommends that an organization coordinate with other organizations to share details of the incident. For example, the organization could log the incident in the VERIS community database.
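To give a sense of what sharing an incident might look like, the sketch below builds a simplified record loosely modeled on the VERIS “A4” structure (actor, action, asset, attribute). The field values are fictional and the structure is abbreviated; consult the actual VERIS schema before contributing to the community database:

```python
import json

# Fictional, abbreviated record loosely following the VERIS A4 model.
incident = {
    "incident_id": "2021-0042",
    "actor": {"external": {"motive": ["Financial"]}},
    "action": {"hacking": {"vector": ["Web application"]}},
    "asset": {"assets": [{"variety": "Server"}]},
    "attribute": {"confidentiality": {"data_disclosure": "Yes"}},
    "timeline": {"incident": {"year": 2021}},
}

def to_veris_json(record: dict) -> str:
    """Serialize the incident record for sharing."""
    return json.dumps(record, sort_keys=True)
```

Structured records like this are also what makes the automation NIST recommends feasible: machine-readable incidents can be shared throughout the response life cycle without manual re-entry.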
The critical recommendations from NIST for sharing information are as follows:
  • Plan incident coordination with external parties before incidents occur.
  • Consult with the legal department before initiating any coordination efforts.
  • Perform incident information sharing throughout the incident response life cycle.
  • Attempt to automate as much of the information sharing process as possible.
  • Balance the benefits of information sharing with the drawbacks of sharing sensitive information.

Share as much of the appropriate incident information as possible with other organizations.


8 Expert Tips To Clear CEH Exam In First Attempt

 

With the Covid-19 pandemic ravaging the country, more and more businesses have had to shift online. While many businesses struggle to get their feet off the ground in the online sphere, organizations that help with IT training and obtaining ethical hacker certification have been around the block.

 

Whether you are getting your ethical hacker certification or an Azure certification, these online educational centres have you covered with all the prep material and concepts you need. No strangers to online learning, they perfected their course material long before teaching you.

8 Things To Know If You Want To Pass The CEH Examination

Given below are the top eight things you need to know, which are also the tips that will help you clear the CEH examination successfully.

Practical Knowledge 

Using real-world situations while studying will ensure you understand the concepts better, allowing you to grasp them more fully and recall them quickly during the cybersecurity certification examination.

Predict The Pattern And Study Accordingly

The test aims to have candidates demonstrate essential skills from the syllabus and then evaluates them against the criteria for passing. The difficulty of the test is determined by leading ethical hackers in the field.

There are many websites available online that focus on predicting the pattern. While they may not be accurate, they can give you a clue as to how the paper will be structured. Studying according to the pattern, which means giving more attention to areas with higher mark weightage, is how you will pass the exam.

#1 Stay Focused

Staying focused will significantly benefit you, especially if you are balancing a full-time job while studying for the examination. Setting deadlines and completing self-made assignments within that limited time is one of the best ways to make sure you are on track.

 

These are just a few simple ways you can be well prepared for your exam if you don’t want to take assistance from a training centre or take up a related course. If you have time, you can take up these tips along with learning from an online or offline course as well.

#2 Make A Study Plan And A Study Group

The process of studying and applying for the ethical hacker certification can cost quite a bit. So whether you are going with a training partner or not, you should consider making a study plan. Being disciplined while following this plan will only help you prep properly.

Another method that has a lot of success is making study groups with other candidates applying for the same exam. Exchanging notes and clearing doubts with them is much more helpful than any study material you can buy.

#3 CEH Exam Pattern

While preparing, it is crucial to study the exam pattern while also covering the syllabus. Sometimes knowing the material is not enough; you also have to budget your time so that you finish the exam with time to spare.
The CEH exam offered by the EC-Council is a multiple-choice test of 125 questions. The test duration is 4 hours, and all 125 questions have to be answered.
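The pacing arithmetic is worth making explicit: 125 questions in 240 minutes leaves just under two minutes per question. A quick sketch, using a simple linear pacing assumption of my own:

```python
# 125 multiple-choice questions in a 4-hour sitting.
QUESTIONS = 125
MINUTES = 4 * 60

minutes_per_question = MINUTES / QUESTIONS  # 1.92 minutes per question

def questions_answered_on_pace(minutes_elapsed: float) -> int:
    """Roughly how many questions you should have answered by this
    point in order to finish on time."""
    return round(minutes_elapsed / minutes_per_question)
```

After the first hour, for instance, you should be roughly 31 questions in to stay on pace.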

#4 Practice Exams

Practice exams are an essential tool when prepping for any exam; they let you know your strong points and your weak points, ensuring you spend adequate time on each. Taking practice tests during this time will also help you see which areas your knowledge is lacking in.

#5 Start Early

Sometimes it just boils down to who is better prepared and smarter and not who has understood the concepts better. The extra time you spend in prep will significantly benefit you when the time comes.

#6 Ask For Help

Do not be afraid to ask for help, especially when you need it. Either from your IT training partner or an online community of aspiring ethical hackers.

Conclusion

Going with Koenig Solutions as your IT training partner is one of the best decisions you can make for yourself. Not only do they provide extensive study material which will help you pass the exam, but through their practical approach, they give you a significant boost to get your career of ethical hacking off the ground.
The ethical hacking course offered by Koenig Solutions is specially crafted for maximum knowledge retention by those juggling work while trying to become a CEH.

 

 


The Ultimate Online Privacy Guide for Journalists

 

As a journalist in 2021, the dangers you face are ever-increasing. Without the proper protection from online threats, you risk hackers stealing confidential information, exposing your sources, breaking anonymity, and getting hold of your unpublished stories. You’d be a prime victim for blackmail — or worse.

 

Some of these dangers can even be extreme and life-threatening. According to UNESCO, 495 journalists were killed between 2014 and 2018, an 18% increase compared to the previous 5-year period. In addition, more journalists are being murdered in non-conflict zones than in conflict zones. Of all the journalists killed in 2018, 33% were TV journalists, 26% were print journalists, and journalists in online media formed a significant 15%.

Being based in a non-conflict zone no longer assures your safety like it used to. Working online from behind a screen can’t guarantee you’ll be protected either. As the journalism industry trend continues to move from print to digital, it is imperative to your well-being — and that of your colleagues and sources — that you put the appropriate online privacy safeguards in place.

Securing your software and hardware with the right encryption tools can help you keep your confidential files and sensitive information from falling into the wrong hands. It’s also vital to utilize privacy-enhancing software to protect your anonymity and that of all your contacts.

Below are 12 ways you can protect your work, your sources, and yourself in 2021.

#1 Send Messages to Sources on Secure Apps

Whenever you contact someone by text or voice, there’s a chance your messages will be intercepted or that a third-party is listening to your phone call. By using a messaging app with strong encryption, you no longer need to worry about unwanted eavesdropping.

Even though many messaging apps have basic encryption — and some even promise end-to-end encryption — you need to be careful which apps you choose. For example, WhatsApp claims to have end-to-end encryption, but its parent company Facebook has an extremely poor reputation for protecting its users’ privacy.

Thankfully, there are secure messaging apps available. These apps will ensure safe communication between you and your sources or colleagues.

  • Signal — Strong combination of end-to-end encryption and extremely limited logs. This was confirmed in a court case, when all the company could produce was the time of account creation and the last log on.
  • Telegram — Provides encryption, self-destructing messages, and two-factor authentication.
  • Threema — Threema doesn’t use your phone number but instead creates an anonymous ID for you. This is great for talking to someone whom you don’t want to directly give your number to.

 

 

Besides top-of-the-line encryption, the Signal app offers some additional privacy features

All of the apps above offer end-to-end encryption and other security features, but Signal and Telegram require your phone number. Although the number is hashed and anonymized, I recommend signing up with a brand-new number that isn’t linked to you.

#2 Secure Your Email to Protect Confidential Files

Email is most likely your main point of communication, aside from instant messaging. Unfortunately, major email providers still fail to provide standard encryption options. This means your emails can be intercepted and read, especially at the recipient’s end.

If you’re planning to send or receive potentially sensitive data, it is best to sign up for a secure email service.
Tutanota is one of the best options available as it encrypts everything — including the subject line, text, attachments, and even your address book.

 

Use an encrypted email service like Tutanota, which encrypts all aspects of sent emails

Tutanota has a free and paid version, with storage space ranging from 1GB to 1TB.

When emailing someone using a mainstream email provider, Tutanota will send them a message with a link where they enter a password to unlock your message. This keeps your emails private in all circumstances. I like Tutanota as it’s constantly being improved by developers and privacy experts, and it includes a free mobile client.

Other popular email services like ProtonMail and Hushmail offer many of the same features. However, Tutanota is the only one to encrypt the entirety of an email.

#3  Encrypt All Your Devices

You need to ensure that the devices you and your sources use are encrypted. This includes your computers, tablets, phones, and external hard drives.
There are three ways to protect your sensitive data on your devices:

  1. Full-disk encryption (FDE): This is the most secure way to encrypt your device. Your disk will only be accessible with a password or PIN code. If your drive contains unfinished reporting and other sensitive files, this step is vital to take.
    You can use BitLocker if you’re a Windows user, and macOS users can use FileVault. You can also use BitLocker to encrypt your external drives, such as USB drives and memory cards. macOS users do this via Disk Utility.
  2. Encrypting specific files or folders: You can use open-source software, like VeraCrypt, to encrypt individual files and folders.
  3. Air gapping: This is a basic form of protection where you keep your device disconnected at all times. However, this method is only effective if your system is perpetually offline — this means no Bluetooth or NFC either.

 

#4 Visit Websites Starting with HTTPS

What’s the difference between HTTP and HTTPS? HTTP doesn’t encrypt data sent between your browser and a website — but HTTPS does. This is essential if you’re interacting with sensitive data, need to log in with private credentials, or are making financial transactions.

An easy way to check if you’re on a secure page is to look at the address bar. Is the site you’re on using “http://” or “https://”? You should also pay attention to the small padlock symbol to the left of the address bar. This can show you if a connection isn’t secure, even if you appear to be on an HTTPS page.

 

Ensure your browser uses HTTPS and has a secure connection
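If you script any of your research, you can enforce this habit programmatically. The sketch below is a minimal illustration using only Python’s standard library; note that it only inspects the URL scheme, so it confirms you are requesting HTTPS, not that the site’s certificate is actually valid.

```python
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    """Return True only if the URL explicitly uses the HTTPS scheme."""
    return urlparse(url).scheme == "https"

print(uses_https("https://example.com/login"))  # True
print(uses_https("http://example.com/login"))   # False
```

A browser extension like HTTPS Everywhere (or a browser that upgrades connections automatically) applies the same rule to every page you visit.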

#5  Use a Private Internet Browser

It’s no secret that many web browsers store your personal, private, and financial information. If you are reporting on a sensitive case, this data could be intercepted by hackers, the government, or other unwanted third-parties.
Don’t forget that it’s not just about securing your work, but ensuring you won’t be silenced in other ways. If you have particular online interests that you don’t want others knowing about, such as your porn viewing habits, these could be used to blackmail you.

Using your browser’s private or incognito mode won’t ensure your privacy, as your ISP and other parties can still track what you’re doing. I recommend combining a private browser with a VPN, or using a more private browser.
Here are my recommended private browsers:

TOR (The Onion Router)

TOR is the most private option on this list. It was originally developed by the US Navy for anonymous communication.

TOR encrypts your data and “bounces” it randomly around the world via a network of volunteer relays. Additionally, all your requests are routed via HTTPS (a more secure protocol), no scripts are loaded (making you harder to track), and you’re permanently in incognito mode (so none of your browsing data is stored).
This way, you can research and communicate freely without worrying about being tracked, and even navigate the dark web.

Firefox

Run by the non-profit Mozilla organization, Firefox has built-in tracker blocking and is monetized almost entirely via royalties from partnerships and distribution deals. This means none of your browsing data is sold to third-party companies to make money.

I highly recommend using the DuckDuckGo search engine in Firefox, which doesn’t store any data or track your activities. While DuckDuckGo doesn’t present as many search results as Google, it’s still a much safer option for performing anonymous research.

You can also use Firefox’s “Containers” add-on to separate your browsing activity without having to clear your history, log in and out, or use multiple browsers.

Brave Browser

Brave is based on Chromium, the open-source code powering Chrome. However, aside from the core code, the two browsers are very different. Unlike Chrome, Brave automatically blocks cross-site trackers and adverts, and upgrades your connection to HTTPS. This helps you remain more anonymous online.

 

Brave automatically blocks trackers and upgrades your connection to HTTPS
Warning! Avoid Chrome as Your Internet Browser
Google’s business model relies on collecting vast quantities of personal data and monetizing it. While this isn’t inherently dangerous, the stored data can be obtained by malicious third parties or law enforcement officials, so you should avoid Google’s services where possible.
If you must keep a Google account, consider signing up for its Advanced Protection program. This adds extra defences and restrictions to your Google account and associated apps. Google created it specifically for journalists, activists, business leaders, and political campaign teams.

#6  Stay Anonymous Online with a VPN

A VPN, or Virtual Private Network, routes your connection through a global network of servers, giving you an anonymous and encrypted internet connection. The encryption hides all your online activities so that no one can trace your browsing back to you.

A VPN is extremely useful if you’re reporting from countries where government surveillance is a threat. It helps you work under the radar of the country’s surveillance technology, keeping your traffic and online activities private.

 

Use a VPN to encrypt your traffic and avoid censorship by connecting to servers around the world

Before connecting to a VPN server, you should first check if the server is hosted in a country that is part of any intelligence-sharing alliances. These agreements may result in VPN providers being forced to hand over your online information to the government.

There are currently three significant alliances involving a total of 14 countries, with some potential third-party contributors.

  • 5 Eyes — USA, UK, Canada, Australia, and New Zealand
  • 9 Eyes — 5 Eyes Countries plus Denmark, France, The Netherlands, and Norway
  • 14 Eyes — 9 Eyes Countries plus Germany, Belgium, Italy, Sweden, and Spain
  • Potential 3rd-Party Contributors — Israel, Japan, Singapore, and South Korea

You should also be careful not to connect to servers in countries with potentially hostile governments. For instance, if you’re reporting on a sensitive issue in Russia, you could be targeted by Russian intelligence. If you connect to a server that is physically in Russia, it is possible that the local authorities could be monitoring this server — and hence your online activities.

It’s especially important for journalists to get a VPN with a no-log policy. These providers will not store any information about your online activity. Highly recommended no-log VPNs for journalists are NordVPN, ExpressVPN, and Surfshark.

Don’t trust free VPNs! Free VPNs are not trustworthy enough for secure journalistic needs. To make a profit, free VPNs are even known to sell user data and give network access to malicious third parties. Make sure you thoroughly research any free services you use before entrusting them with your own valuable data.

For an in-depth look at how VPNs work, check out our complete beginner’s guide to VPNs.

#7 Use a Zero-Knowledge Cloud Provider

You can use cloud services to send large files to your colleagues or sources, especially if they’re geographically far away.
However, make sure you use a zero-knowledge cloud provider, which encrypts your files before they’re uploaded. A unique password will then be issued to your recipient in order to decrypt the files.
Sync and pCloud are two zero-knowledge cloud services that provide secure end-to-end encryption, even with their free plans.

#8  Create Strong Passwords

It’s extremely important to use strong passwords, especially for your email accounts. If a hacker gets access to your primary email account, they could quickly use the same password to break into more of your online accounts.
However, contrary to popular belief, a password with a random combination of uppercase and lowercase letters, numbers, and symbols isn’t the strongest option. Neither is it the easiest to remember.

Here’s how you can create a strong password:

  • Make a long password — make sure it’s at least 11 characters long.
  • Use random words — you could flick randomly through a dictionary until you have 4+ words and link them together.
  • Ensure it’s easy to remember — choose several words that have no connection but that you are able to remember.
  • Use a unique password for each account — don’t repeat the same password twice.
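To make the word-based approach above concrete, here is a minimal Python sketch. The word list is a tiny illustrative sample, not a real dictionary; a genuine passphrase should be drawn from a large list, and the entropy calculation below assumes a Diceware-sized dictionary of 7,776 words.

```python
import math
import secrets

# Illustrative sample only -- a real passphrase needs a dictionary of
# thousands of words, not a dozen.
WORDS = ["orbit", "velvet", "cactus", "thunder", "maple", "canyon",
         "lantern", "pebble", "drift", "saffron", "quartz", "ember"]

def passphrase(n_words=4, wordlist=WORDS):
    """Join n randomly chosen words; secrets gives cryptographic randomness."""
    return "-".join(secrets.choice(wordlist) for _ in range(n_words))

def entropy_bits(n_words, dictionary_size):
    """Entropy of an n-word passphrase drawn uniformly from a dictionary."""
    return n_words * math.log2(dictionary_size)

print(passphrase())                     # e.g. "maple-ember-orbit-drift"
print(round(entropy_bits(4, 7776), 1))  # 51.7
```

Four words from a 7,776-word list give roughly 51.7 bits of entropy, and each extra word adds another ~12.9 bits, which is why length beats symbol-juggling.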

People use weak passwords because they can’t remember more complex, and therefore stronger, ones.
Luckily, you can use a password manager to store your login credentials. You’ll only have to remember one “master password” that unlocks the whole vault. I recommend KeePass, LastPass, or Dashlane; all of these password managers are secure and user-friendly, with free and paid plans available.

#9 Use Two-Factor Authentication (2FA)

If you’re storing sensitive material on one or more of your online accounts, 2FA is vital.
2FA requires two types of authentication before giving you access to your online account. You first enter your normal account login details, and then enter a randomized token sent to your physical device.
The necessity for a physical device adds a strong extra layer of security. Even if someone gains access to your credentials, they will need this device to actually log in.

You can set up 2FA in 3 ways:

  1. On your phone. For Android users, you can use andOTP or Google Authenticator. For iOS users, you can use OTP Auth or Yubico Authenticator. Authy is a good choice if you use both operating systems.
  2. Via a physical device like YubiKey or Google Titan. It needs to be physically plugged in or tapped against your phone (if NFC is supported) for verification.
  3. Via SMS or email. This is the least-secure method, but you should still set it up if it’s the only type of 2FA on offer.

While app-based 2FA is the easiest and most widely-supported method, physical 2FA is the safer choice.
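For the curious, the one-time codes that authenticator apps generate typically follow the TOTP standard (RFC 6238), which builds on HOTP (RFC 4226). Here is a minimal sketch using only Python’s standard library; the printed value matches a published RFC 6238 SHA-1 test vector, so this is the standard algorithm rather than any particular app’s implementation.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP computed over the current 30-second time window."""
    t = time.time() if at is None else at
    return hotp(secret, int(t // step), digits)

# RFC 6238 test vector: SHA-1, 8 digits, T = 59 seconds
print(totp(b"12345678901234567890", at=59, digits=8))  # 94287082
```

Because the code depends only on the shared secret and the current time, the server and your phone compute the same six digits independently; nothing is “sent” over SMS, which is why app-based 2FA resists interception better.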

#10 Watch out for Basic Dangers to Journalists

These online threats are simple to avoid but can cause major issues if you fall victim to them.

Phishing

Phishing is a cyber attack that tricks victims into believing they are being contacted by a trusted company or service.
Here’s an example: you receive an email that looks like it’s from Google, asking you to verify something on their site via a link. When you click the link and log in, a hacker steals your login credentials. Even if you’re using 2FA and enter your code on the phishing website, hackers can use this information to immediately log in to your real account.

This means everything you’ve stored on Google — including Gmail and Google Drive — can now be accessed by the hacker.

You can see an example of a phishing email below:

 

This phishing email looks like it came from FedEx, but it has some red flags such as the strange email address

While it looks semi-official, you can see that it isn’t addressed to anyone in particular, and the formatting and punctuation are strange. However, the main red flag is the email address, which is clearly not related to FedEx.
You still need to be vigilant about what you open and click when receiving emails. Cyber attackers can spoof a company’s domain to impersonate the company or one of its employees.
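One mechanical check you can automate is comparing the sender’s actual address domain against the domain you expect. The sketch below is a simplified illustration (the addresses are hypothetical); real phishing detection also needs to inspect links, SPF/DKIM authentication results, and display-name spoofing.

```python
from email.utils import parseaddr

# Hypothetical expected domain for illustration
EXPECTED_DOMAIN = "fedex.com"

def sender_domain(from_header):
    """Extract the domain of the actual address, ignoring the display name."""
    _, address = parseaddr(from_header)
    return address.rsplit("@", 1)[-1].lower()

def looks_spoofed(from_header, expected=EXPECTED_DOMAIN):
    """Flag any sender whose real domain doesn't match the expected one."""
    return sender_domain(from_header) != expected

print(looks_spoofed("FedEx Support <notice@fedex-delivery.xyz>"))  # True
print(looks_spoofed("FedEx <tracking@fedex.com>"))                 # False
```

Note how the display name says “FedEx” in both cases; only the address after the `<` reveals the mismatch, which is exactly the red flag in the sample email above.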

Malware

Be careful not to accidentally infect your device with malware when browsing the internet. Journalists should be especially concerned about ransomware, which encrypts your whole hard drive and demands a monetary ransom in exchange for the decryption key.
Make sure you use your common sense and avoid visiting potentially malicious websites. Don’t browse non-secure websites and avoid websites inundated with adverts, especially pop-ups. Bookmark sites you regularly return to so you know you’re visiting the real version.

You can use Malwarebytes’ anti-malware software to regularly scan your computer for threats. The free version is enough to detect malware, but the premium version offers useful extras, including ransomware protection and real-time protection.

 

Regularly scan your machine for malware; Malwarebytes’ free version is a good option

You should also be aware of “malvertising”, which quietly delivers malware via adverts without you even realizing it. Use an adblocker such as uBlock Origin on Chrome or Firefox to protect yourself against this threat.

Keep Your Devices Updated

Keeping your devices updated ensures you’ll be protected from the newest security threats. This advice applies to the operating system on your laptop or computer, mobile devices, and programs or apps you use.

Lock Your Devices and Set up Tracking

Make sure you have some form of lock on your mobile device and computer. Passwords are more secure than fingerprints or facial unlocking.
In case you ever need to recover a lost device, you can set up device tracking on Android and iOS devices. You can also activate the option to remotely wipe your devices, in the event that confidential information comes to be possessed by the wrong person or group.

 

Make sure you can always track your mobile device, so you can locate or remotely wipe it

#11 Use an Alternative to Slack in the Newsroom

Instant messengers and collaborative tools like Slack and Skype for Business have made their way into workplaces around the world. As a journalist, you may be sending sensitive data and files to your colleagues via these channels, so you need to make sure they can’t be accessed by unauthorized third parties.

Slack is one of the most widely-used collaboration tools, thanks to its user-friendliness. However, Slack does not employ security features such as end-to-end encryption or self-destructing messages.

More secure options are:

  • Keybase Teams — Similar to Slack, but with end-to-end encryption and a self-destruct feature that completely removes messages from the conversation.
  • Riot — Built on the open-source Matrix protocol, Riot offers the best implementation of end-to-end encryption plus a user-friendly interface.
  • Wickr — Includes end-to-end encryption, auto-deletion of messages, and forensic deleting of content from your device.
  • Semaphor — A zero-knowledge messenger protected with end-to-end encryption and blockchain technology.

 

#12 Educate Your Sources and Colleagues

The average person doesn’t know how to share confidential information in a secure way — and this includes your sources. It’s important that you show your contacts how they can protect any confidential files and communicate privately with encrypted messages and emails. Even if they already know, it’s in both of your best interests to discuss a standard operating procedure (SOP) before cooperating.
Explain to them that under no circumstances should they deviate from your SOP. It’s best to meet in person to avoid leaving any digital footprints, even if it’s just to set up their online safeguards.
Remember, even if your own online security is close to perfect, you may be vulnerable if your data leaks through other people. Your online security is only as strong as those in your circle of communication.

Lazy Security Costs Journalists — Protect Yourself Now

Every journalist needs to protect themselves online with strong security tools — from hackers, state-sponsored operatives, and others who actively work to prevent damning stories from being published.
The fastest way to secure your digital presence immediately is to start with the measures above that take only minutes to set up, such as a secure messaging app, a password manager, and 2FA.

By taking your first steps towards staying private online, you can keep your stories and investigative pieces secure until they’re ready to be published.

Action Point
PS: If you would like an online course on any of the topics covered on this blog, I will be glad to deliver it at an individual or corporate level. I have trained several individuals and groups, and they are doing well in their various fields of endeavour. Some of those I have trained include staff of Dangote Refinery, FCMB, Zenith Bank, and New Horizons Nigeria, among others. Please come on Whatsapp and let’s talk about your training. You can reach me on Whatsapp HERE. Please note that I will be using Microsoft Teams to facilitate the training.

I know you might agree with some of the points that I have raised in this article. You might not agree with some of the issues raised. Let me know your views about the topic discussed. We will appreciate it if you can drop your comment. Thanks in anticipation.

 

 

 

Fact Check Policy

CRMNAIJA is committed to fact-checking in a fair, transparent and non-partisan manner. Therefore, if you’ve found an error in any of our reports, be it factual, editorial, or an outdated post, please contact us to tell us about it.

 



 

Online Advertising: Facts To Know As A Publisher

 

In my previous article, I talked about some of the facts you need to know about Digital Marketing generally. In this article, I want to cover some facts you need to know about Online Advertising. Before I do that, I would like to explain some of the terms you are likely to come across when you are advertising online.

 

#1 Advertiser

This is an individual or organisation with a product to sell, looking for channels or partners that can create awareness on their behalf. A good example is Nestle or Nike using online media to advertise their products and services.

 

#2 Publisher

This is an individual who has content and is looking for ways of monetising it. Publishers can allow adverts to be placed within their content in order to make money. A good example is a blogger using Adsense to monetise his blog.

 

#3 Ad-network 

The advertising network provides the infrastructure that allows advertisers and publishers to meet, so that advertisers who need publicity and publishers who need revenue can come together to create awareness about products and services for the target audience.

 

#4 Consumer

He or she is the buyer looking for products and services that will satisfy his or her needs.

What then is Online Advertising?

Online Advertising is an effective form of advertising compared to other forms because more than a billion people have access to the internet across the globe. Online advertising is also good because it is not passive: customers can interact with it, and it allows them to take action immediately.

 

It also includes email advertising, search engine marketing, mobile advertising, and social media marketing, among others.

 

Here are some of the benefits…

#1 Attract Visitors 

One of the major benefits of this form of advertising is that it attracts visitors to your website. If you are a blogger or content creator expecting reasonable traffic on your website and you are relying on search engine traffic alone, it might not come quickly.

 

If you are ready to place banner ads on popular blogs or bid for keywords in order to drive traffic to your website, you will be amazed at the level of traffic that you will have on your website. Online advertising gives you tons of traffic with immediate results. 

 

#2 Convert Visitors 

One of the reasons for running online campaigns is to turn visitors into customers. If you are running campaigns just to create awareness, they might not bring immediate results.

 

As I said in one of my articles, you cannot always get a 100 percent conversion rate for your campaign, but you have to make sure that your campaign is captivating enough. It must contain all the information your prospects need in order to make buying decisions.

 

#3 Retain and Grow customers 

Online advertising can also assist organisations in retaining and growing customers. Some online customers will buy your products but may not necessarily come back for repeat purchases. You can run a campaign targeted specifically at this type of customer.

 

You can offer them mouth-watering offers that will make them engage in a repeat purchase. Running this type of campaign can also increase brand awareness among your target audience as well. 

 

 


 
