Tuesday, September 13, 2016

New DDoS Downtime Calculator

Risk assessment is a critical part of any security strategy. Only by understanding the real risks associated with a given threat can you determine the most appropriate way to address them, as well as the right level of investment.
Incapsula’s new DDoS Downtime Calculator is designed to help you assess the risks associated with an attack, offering case-specific information adjusted to the realities of your organization.
The algorithm inside our DDoS Downtime Calculator is based on real-world information from a DDoS Impact Survey we conducted among 270 organizations representing various sizes and industries. Participants provided detailed information about the actual impact of DDoS attacks (e.g., frequency and length, overall costs, affected business unit).
Our subsequent data analysis uncovered factors that cause impact cost variances. These insights helped us estimate the probability that your organization will incur a DDoS assault.
Further, the results enabled us to design our DDoS Downtime Calculator to provide personalized risk assessments based on:
  • Company size
  • Industry
  • Type of hosting environment (e.g., cloud, VPS, own data center)
  • Most vulnerable operational area
  • Security measures in place
  • Current level of confidence in DDoS prevention capabilities

How It Works

To use Incapsula’s DDoS Downtime Calculator, simply visit the calculator page and answer six short questions about your organization and its existing security measures. (No registration is needed.)
Based on your input, using a series of linear regression models, the calculator then estimates:
  1. The probability that your organization will ever be hit by a DDoS attack
  2. The probability that your organization will be targeted in the next year
  3. The cost per hour of such an assault
  4. Expected annual cost of attacks if DDoS protection measures are not adopted
It’s our hope that the revealed data will help raise awareness of the threat posed by DDoS attacks, while also helping those organizations taking their first steps in formulating their DDoS mitigation strategy.
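To make the estimation step concrete, a linear-regression estimate of this kind can be sketched in a few lines. The coefficients and inputs below are invented for illustration only; they are not Incapsula's survey-derived values:

```python
# Illustrative only: the calculator's actual regression coefficients are
# proprietary. This sketches how a linear model could map survey-style
# inputs to an estimated hourly outage cost.

# Hypothetical coefficients (not Incapsula's real model).
COEF = {
    "intercept": 5_000.0,
    "employees": 38.0,        # cost grows with company size
    "cloud_hosted": -2_500.0, # example adjustment for hosting environment
}

def estimated_cost_per_hour(employees: int, cloud_hosted: bool) -> float:
    """Linear-regression-style estimate of DDoS downtime cost per hour."""
    return (COEF["intercept"]
            + COEF["employees"] * employees
            + COEF["cloud_hosted"] * (1 if cloud_hosted else 0))

print(estimated_cost_per_hour(500, True))  # a 500-person, cloud-hosted firm
```

The real calculator combines several such models (probability of attack, cost per hour, annual exposure), each fitted to the survey data.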

Monday, September 12, 2016

DNS security appliances in Azure

Malware and botnets (such as ZeroAccess, Conficker and Storm) need to be able to propagate and communicate. They use several communication techniques, including DNS, IRC and Peer-to-Peer networks. Normally the DNS protocol resolves human-friendly domain names into machine-friendly IP addresses. However, the fact that most organizations do not filter DNS queries means that it can be used as a covert communication channel. Data leaving a compromised system can be encoded in the DNS query and instructions can be sent back to the malware in the DNS responses without raising suspicions.
This article gives an overview of this threat and describes some ways of protecting your network from it.

How DNS works

DNS is a highly distributed system, i.e. no single server or organization has the answers to all DNS queries. The “.com” DNS servers know which Microsoft servers have DNS data for “microsoft.com” but they do not have the DNS records themselves. These authoritative DNS servers only store data for their own domains and traversal of a number of authoritative DNS servers may be required to find a particular DNS record.
When an application needs to look up a DNS record, a query is sent to a local recursive resolver. This server navigates the hierarchy of authoritative DNS servers to find the required DNS record. This process is called recursive resolution and is usually handled by a fleet of resolvers within your infrastructure, or in this case, within the Azure infrastructure. The IP addresses of the recursive resolvers are either statically configured in the operating system (by the network admin) or dynamically configured through systems such as DHCP. Azure uses DHCP.
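The hierarchy traversal can be illustrated by listing the zones a resolver walks for a given name, root first (a simplification: it ignores delegation cuts that don't fall on label boundaries):

```python
def delegation_path(name: str) -> list[str]:
    """Return the chain of zones a recursive resolver walks, root first.

    E.g. 'www.microsoft.com' -> ['.', 'com.', 'microsoft.com.', 'www.microsoft.com.']
    """
    labels = name.rstrip(".").split(".")
    path = ["."]  # resolution starts at the root servers
    for i in range(len(labels) - 1, -1, -1):
        path.append(".".join(labels[i:]) + ".")
    return path

print(delegation_path("www.microsoft.com"))
# ['.', 'com.', 'microsoft.com.', 'www.microsoft.com.']
```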

Most of the time, recursive resolvers do not filter the queries they resolve. While your network administrator might not allow you to open an HTTP connection to an outside resource, chances are the admin will allow arbitrary DNS requests to be resolved. After all, what harm can a DNS query do?

DNS-based threats

The DNS system doesn’t just map domain names to IP addresses. There are a number of different DNS record types. For example, MX records are used to locate the mail servers for a domain and TXT records can store arbitrary text. Records such as TXT records make it easier for the malware to retrieve data, e.g. instructions or payload.
Bad actors can use DNS queries within their malware to contact command and control servers. The malware does a DNS query and then interprets the response as a set of instructions, such as “this target is interesting, install the key logger”. They can also use DNS queries to download malware updates and additional modules. To make them harder to block, malware often uses a domain generation algorithm (DGA) to generate a large number of new domains each day. To keep the communication channel open, attackers only need to register a small number of these domains, but, to block communications, law enforcement needs to block nearly all of them. This stacks the deck in favour of the bad guys.
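To make the DGA idea concrete, here is a toy generator (not taken from any real malware family) that derives a fresh list of pseudo-random domains from a shared seed and the current date; both the malware and its operator can compute the same list independently:

```python
import hashlib
from datetime import date

def dga_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Toy DGA: derive deterministic pseudo-random .com domains
    from a seed and the date. Real DGAs generate hundreds per day."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

for d in dga_domains("example-botnet", date(2016, 9, 12)):
    print(d)
```

The attacker registers only one or two of the day's domains; defenders must block almost all of them.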
Ok, so that’s inbound communication. What about exporting data from infected servers? The DNS protocol allows a domain name to be up to 253 characters long, and queries for any name ending in “.mydomain.com” will land on the DNS servers for “mydomain.com”. Data can be exported by crafting DNS queries on the infected host (e.g. “the_password_for_joeblogs_gmail_com_is_letMeIn.mydomain.com”) and using custom DNS server software to interpret the message on the other end. This method has been generalized into the TCP-over-DNS protocol (not to be confused with DNS-over-TCP), which tunnels TCP traffic through the DNS infrastructure.
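A minimal sketch of the query-encoding side of such exfiltration, assuming an attacker-controlled zone (the domain name is hypothetical, and no queries are actually sent):

```python
import base64

ATTACKER_ZONE = "mydomain.com"   # attacker-controlled zone (hypothetical)
MAX_LABEL = 63                   # RFC 1035 label-length limit

def exfil_queries(secret: bytes) -> list[str]:
    """Encode stolen bytes as DNS query names under the attacker's zone."""
    # Base32 keeps the payload within the characters allowed in hostnames.
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    # In real malware each name would be resolved; the queries land on the
    # attacker's authoritative server, which decodes the labels.
    return [f"{c}.{ATTACKER_ZONE}" for c in chunks]

print(exfil_queries(b"password=letMeIn"))
```

A DNS firewall looks for exactly this pattern: long, high-entropy labels and unusual query volumes to a single zone.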
It’s important to note that the malware needs to have infected the server before it can start using these communication channels. Therefore, filtering DNS is primarily a layered defense mechanism: a mitigation for when other techniques have failed to prevent the initial infection. In desktop environments, DNS filtering can also help prevent malicious links in emails or on websites from initiating the infection process.

Best practices

There is no substitute for good security and good security always uses a layered approach. The primary focus should be on preventing malware infections and propagation. An additional layer is to monitor and/or filter DNS traffic to detect and/or block communication of malware on infected machines. A balanced defense strategy should consider the following:
  • Keep servers patched and up to date.
  • Expose only the endpoints that are truly necessary.
  • Use Network Security Groups (network ACLs) to restrict communication to/from/within your network, e.g. block DNS traffic (port 53) to servers other than trusted recursive resolvers.
  • Use firewalls (DNS, application and IP) to detect and filter malicious traffic; 3rd-party appliances are available in the Azure Marketplace.
  • Separate critical and risky workloads (e.g. don’t surf the web from your database server).
  • Run anti-virus/anti-malware on your servers (e.g. Antimalware for Azure).
  • Run a smart DNS resolver (a DNS firewall) that scans DNS traffic for malware activity. See the Azure Marketplace for available 3rd-party DNS firewalls.
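The port-53 restriction in the third bullet can be expressed as a pair of NSG rules. This is a sketch using the current Azure CLI (`az`); the resource group, NSG name and resolver address are placeholders for your own environment:

```shell
# Hypothetical names/addresses -- adjust to your environment.
# Allow DNS only to the trusted recursive resolver (10.0.0.4 here)...
az network nsg rule create --resource-group MyRG --nsg-name MyNsg \
    --name AllowTrustedDns --priority 100 --direction Outbound --access Allow \
    --protocol '*' --destination-port-ranges 53 \
    --destination-address-prefixes 10.0.0.4

# ...and deny DNS (port 53) to everything else. Lower priority number wins.
az network nsg rule create --resource-group MyRG --nsg-name MyNsg \
    --name DenyOtherDns --priority 200 --direction Outbound --access Deny \
    --protocol '*' --destination-port-ranges 53 \
    --destination-address-prefixes '*'
```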
While the Azure infrastructure provides the core set of security features, Azure is also building a large ecosystem of 3rd-party security products. They’re available through the Azure Marketplace (e.g. firewall, WAF, antivirus, DNS firewall) and can be deployed with just a few clicks. Many offer a free trial, after which they can be billed either directly through the supplier or hourly through your Azure subscription.
A growing trend is for enterprises to deploy DNS firewalls in their infrastructure, and we’ve started adding 3rd-party DNS firewalls to the Azure Marketplace. These are special DNS servers that inspect DNS queries for signs of malware activity and alert on and/or block the traffic. For example, a query to a command and control (C&C) server can be identified by either the domain being queried or the IP address of the DNS server. A DNS firewall is deployed as a DNS server within your virtual network and often uses a threat intelligence feed to keep up to date with the changing threat landscape.

Friday, September 9, 2016

Data loss prevention (DLP) solutions with document encryption

Organizations face the ongoing challenge of protecting their most sensitive information from being leaked. Two of the most popular solutions used to address this problem are Data Loss Prevention and Enterprise Rights Management. This datasheet explains how these technologies are highly complementary and advises how they can most effectively be used together to provide a complete data leakage solution. It also describes the integrations today between Oracle Information Rights Management and the DLP products from Symantec, McAfee, InfoWatch and Sophos.

Data Loss Prevention


Data Loss Prevention (DLP) technologies aim to prevent leaks of sensitive information. They do so by discovering sensitive information at rest, and monitoring and blocking sensitive information in motion, using content-aware scanning technology. The discovery, monitoring and blocking DLP components run either on the network (servers reaching out to scan repositories or intercepting network information flows) or on endpoints (end user computers or laptops). 

Information Rights Management


Information Rights Management (IRM) also aims to prevent leaks of sensitive information. It does so by encrypting and controlling access to sensitive documents (and emails) so that regardless of how many copies are made, or where they proliferate (email, web, backups, etc.), they remain persistently protected and tracked. Only authorised users can access IRM-encrypted documents, and authorised users can have their access revoked at any time (even to locally made copies). 

Complementary Solutions to Similar Problems


DLP and IRM address very similar problems, but in different and complementary ways:

  • DLP is well suited to situations where an organisation doesn't know where its sensitive information is being stored or sent. Content-aware DLP can map the proliferation of this sensitive information and direct remedial efforts, such as tightening existing access controls using blocking, quarantining or encrypting.
  • Out-of-the-box DLP remedial actions often prove disruptive to business workflows. Sensitive information is often required for collaboration with certain third parties, and configuring DLP to permit only the desired collaboration whilst preventing other data loss proves to be almost impossible.
  • DLP also makes decisions about content at a point in time, e.g. can this user email this research document to a partner? However, 6 months later the organization may sever ties with the partner, at which point the DLP rule may change; but this doesn't affect all the information that has flowed to this partner over the past 6 months. DLP cannot retroactively block access to information that it has previously allowed to pass beyond its control to third parties.
  • Thus DLP customers are looking for a technology to allow secure collaboration triggered by their DLP solution.
  • IRM is well suited to situations where an organisation has relatively well defined business processes involving sensitive information, e.g. sharing intellectual property with partners, financial reporting, M&A, etc. IRM-encrypting sensitive documents or emails ensures that all copies remain secured, regardless of their location.
  • IRM continues to work beyond the enterprise firewall or enterprise endpoints, so authorised end users on partner or home networks or endpoints can use IRM-encrypted documents without being able to make unencrypted copies. This access can be audited and revoked at any time, leaving previously authorised users with useless encrypted copies. IRM provides persistent protection, which means that you can revoke access to information at any time. One simple change in an IRM system can stop access to millions of documents shared with partners, customers or suppliers.
  • IRM protection requires any document to be encrypted. This can be manually actioned by an end user according to a corporate policy, but this reliance on a manual process may result in reduced uptake. To aid uptake and enforce policy, many organizations automate the process via integrations with content management systems and enterprise applications. However, many other sensitive documents are shared in ways that fall outside these perimeters.
  • Thus IRM customers are looking for a technology to detect sensitive data and trigger the IRM encryption process.

Integration Use Cases


From the above it should be clear that the combination of DLP and IRM will be more effective than either solution in isolation.

  1. DLP-discover and IRM-encrypt data at rest
    DLP is used to discover the proliferation of sensitive information (on endpoints and servers) and classify it in terms of its relative sensitivity. Sensitive classifications can then be IRM-encrypted to have persistent access rights in line with enterprise information security policy. For example DLP discovers a set of financial documents stored in a public file share and automatically protects them against an IRM classification that allows only the finance group to open the documents. The documents stay where they are, but IRM enforces the access controls.
  2. DLP-monitor and IRM-encrypt data in motion
    This time DLP monitoring is used to detect sensitive outbound information flows and to add IRM encryption as a remedial action for policy violations. For example a user attempts to email a sensitive document to a supplier, DLP detects this and uses IRM to protect the document but allows the email to continue onto its destination.
  3. DLP discovery of IRM-encrypted information at rest
    It is important that DLP scanners be able to scan IRM-encrypted documents and emails. These can be shallow scans (which verify the document is IRM-encrypted and check the IRM classification), enabling controlled sharing of suitably IRM-encrypted documents, or deep scans (which temporarily decrypt the IRM-encrypted content) to verify that documents are encrypted to the correct IRM classification.
  4. DLP monitoring of IRM-encrypted information in motion
    Shallow scanning of IRM-encrypted documents could be used to ease potentially disruptive DLP blocking of sensitive outbound content. Certain IRM classifications could be allowed outbound while others could be blocked. Deep scanning could be used to add in content-aware policies and ensure consistency between DLP and IRM policies.
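A minimal sketch of use case 2 in code, with stand-in functions (these are not the real Symantec DLP or Oracle IRM APIs; the classification name and detection terms are invented for illustration):

```python
# Hypothetical sketch of use case 2: DLP detects a sensitive outbound
# document and triggers IRM encryption before the mail continues on.

SENSITIVE_TERMS = {"confidential", "draft financials"}

def dlp_detects_sensitive(document_text: str) -> bool:
    """Stand-in for DLP content-aware scanning."""
    text = document_text.lower()
    return any(term in text for term in SENSITIVE_TERMS)

def irm_encrypt(document_text: str, classification: str) -> str:
    """Stand-in for sealing a document against an IRM classification."""
    return f"[IRM:{classification}]" + document_text[::-1]  # placeholder "encryption"

def outbound_mail_filter(attachment: str) -> str:
    """DLP remediation hook: seal sensitive content, pass the rest through."""
    if dlp_detects_sensitive(attachment):
        return irm_encrypt(attachment, "finance-partners")
    return attachment

print(outbound_mail_filter("Draft financials for Q3"))
```

The key point is the control flow, not the stubs: DLP supplies the detection decision, IRM supplies the persistent protection, and the email still reaches its destination.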

Integrating with DLP Vendors


Several customers and partners have asked Oracle to integrate Oracle IRM with the leading DLP vendors' solutions. While all four of the above integration use cases are being scheduled for both network and endpoint components, work has already been done to support the following functionality.

Symantec DLP and Oracle IRM


Oracle and Symantec have collaborated to provide a solution that allows DLP to discover and automatically call IRM to encrypt data at rest. This results in sensitive documents being identified by DLP and then automatically encrypted with IRM. The encrypted files can then remain in their original location rather than being quarantined, but can only be opened by authorized users. The DLP product can also discover and monitor IRM-encrypted documents and then audit, quarantine or take no action depending on policy and context.

McAfee DLP and Oracle IRM


McAfee's Data Loss Prevention quickly delivers data security & actionable insight about the data at rest, in motion and in use across your organization. Protecting data requires comprehensive monitoring and controls from the USB drive to the firewall. The powerful combination of McAfee DLP and Oracle IRM automates the process of protecting your data, giving you confidence that policies are enforced consistently wherever your data needs to travel.

InfoWatch DLP and Oracle IRM


Oracle and InfoWatch have collaborated to provide a solution that controls information transferred via removable storage, optical media, web uploads and emails with attachments; as well as inspects contents of IRM-encrypted files and messages. The solution applies policies to prevent sensitive information leakage. A flexible policy can be configured to enforce IRM-encryption of sensitive emails. Digital fingerprinting of the IRM-encrypted content ensures that no parts or quotes of IRM-protected documents can leak outside the corporate network.

Sophos DLP and Oracle IRM


Oracle and Sophos have collaborated to provide a solution to control the transfer of IRM-encrypted information via removable storage, optical media, web uploads and email attachments. A policy can be configured to simply audit the transfer of IRM protected files or, if required, authorise the transfer of IRM protected files and block the transfer of non-IRM protected files.

Oracle IRM and Data Loss Prevention (DLP) technologies

 I spoke with a customer who is researching technologies which can help them secure sensitive documents and emails within their organization. I went to see them with our Information Rights Management product manager, Andy Peet, and IRM was of course the main topic of discussion. However they were also researching Data Loss Prevention (DLP) and wondered how the two technologies fitted together. So the following is an overview of DLP, its benefits and limitations, and its fit with Information Rights Management.
First some definitions:
  • Information Rights Management (IRM) refers to technologies that use encryption to persistently protect information contained in documents and emails from unauthorized access inside and outside the organization.
  • Data Loss Prevention (DLP) refers to technologies designed to detect and prevent the unauthorized transmission of information from the computer systems of an organization to outsiders.
The definitions sound quite similar, but under the hood the two technologies represent quite different approaches to two closely related problems.

DLP overview

DLP products are content- and context-aware filtering products that monitor outbound information flows from the network, servers and endpoints in order to detect and prevent the unauthorized transmission of information to outsiders. The core intellectual property in DLP is the natural language filtering used to classify information into categories such as PCI, PII, ITAR, GLBA, SOX, etc. Information categories can then be associated with policies, and policy violations logged and automatically remediated.
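A toy sketch of such content-aware classification (real DLP engines are far more sophisticated; the two patterns below are simplified stand-ins for PCI and PII policy packs). A Luhn checksum is used to cut false positives on card-number-like strings:

```python
import re

# Candidate card numbers: 13-16 digits, optionally space/hyphen separated.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum: filters out random digit runs that merely look like cards."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def classify(text: str) -> set[str]:
    tags = set()
    if any(luhn_ok(m.group()) for m in CARD_RE.finditer(text)):
        tags.add("PCI")
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):  # US SSN-like pattern
        tags.add("PII")
    return tags

print(classify("Card 4111 1111 1111 1111, SSN 078-05-1120"))
```

Even this tiny example shows why false positives and localization are hard: every new data type, format and language needs its own carefully tuned patterns.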

DLP systems are typically made up of the following components:

  • MONITOR – Passive network monitoring and reporting (“data-in-motion”), typically operating at the Internet gateway in an appliance form factor.
  • PREVENT – Active remediation by the network component. Remediation actions include alerting, warning, blocking, quarantining, encrypting, self-remediation, etc.
  • CAPTURE – Stores reconstructed network sessions for later analysis and rule tuning (only supported by a few DLP vendors).
  • DISCOVER – Discovers and classifies information (“data-at-rest”) in repositories and on endpoints.
  • ENDPOINT – DLP capabilities extended to desktop application-operating system interfaces such as local file systems, removable media, wireless, etc.

Benefits of DLP

The network monitoring and discovery components of DLP can be relatively easy to deploy, without IRM’s requirement for an endpoint agent. They do tend to immediately generate a bewildering number of policy violations, so it is important that (a) the DLP reporting engine can be tuned to exclude most violations and focus on high-priority applications, e.g. PCI, and (b) the DLP classification engine does not generate too many business-disruptive false positives (we are still far from Terminator-style artificial intelligences, fortunately ;).

The reports from DLP network monitoring and discovery provide a useful information security feedback loop: identifying compliance “hot spots” and poor working practices, mapping the proliferation of sensitive content throughout (and beyond) your enterprise and enabling organizations to tune their existing access control systems.

Limitations of DLP

With all the best will in the world DLP is only ever going to be a partial solution. There are simply too many information flows to monitor and too many violations to process. For all the claims of the vendors true natural language “understanding” remains a pipe dream, and some classification engines are little more than regular expression pattern matching. DLP cannot monitor encrypted information or information that leaves the corporate network to partners, customers or suppliers.

Most DLP customers would agree that moving from passive detection to active prevention is a massive leap. The shortcomings in the classification algorithms result in too many false positives (non-sensitive information mis-classified as being sensitive) and false negatives (where sensitive information is not classified as such), which, combined with crude blocking techniques such as cryptic network drops, wreak havoc on business productivity. Most of the real-world value of DLP is in monitoring and feedback, not active prevention. DLP tells you that you forgot to close the stable door, which horses bolted and in what direction.
DLP classification filters are complex and in a global enterprise will require localization into all the languages in which data may be leaked. This makes maintaining and extending these filters difficult, slow and expensive.
DLP vendors have been forced to add endpoint components because of the numerous channels for data leaks from the endpoint, invisible to network DLP components. These components are for the most part very rudimentary, for example only scanning information sent to removable disks, but not to file shares, DVDs, printers, etc.
There can be widespread employee antipathy towards what is perceived as “big brother” monitoring or enterprise spyware, and some corporations may believe that in terms of policy violations “ignorance is bliss”, i.e. if they detect a million policy violations someone is going to expect them to fix a million policy violations, which is going to be expensive.

DLP and IRM compared

From the above discussion it should be seen that DLP and IRM address similar problems, but not the same problem.

DLP is more useful when an organization wants to protect itself from data leaks but doesn’t really know what information it needs to protect, or where that information resides. It can then use DLP network monitoring and discovery to map the proliferation of its sensitive information and use that map to improve its existing access control systems or apply new systems, such as IRM.
IRM is more useful when the enterprise already knows which information it needs to protect, and wants it secured and tracked both inside and outside the enterprise.
IRM’s value proposition is more towards providing higher assurance security for an enterprise’s most sensitive IP, for example trade secrets or draft financials. Once encrypted all copies of that information are secured and tracked, regardless of location or distribution mechanism.
DLP’s value proposition is more as a feedback/tuning mechanism for other more proactive access control mechanisms, than as an access control system in its own right. Having a means of observing the information actually flowing out of your existing applications and repositories is nevertheless tremendously useful.
IRM and DLP differ in terms of cost of deployment. Network-based DLP monitoring and discovery are easier to deploy, since they do not require an endpoint agent, but have a huge blind spot in terms of endpoint activity. Introducing endpoint agents can make DLP more costly to deploy, since it now needs to manage gateway, server and endpoint agents, compared to IRM’s endpoint-only agent.

Bottom line

The bottom line is that IRM and DLP are more complementary than competitive.  
Standalone they address similar but different problems. DLP and IRM vendors have long talked about integrating the two technologies, to provide a solution greater than the two parts. This would mean a DLP solution automatically applying IRM encryption to content discovered “at rest” or “in motion”, so that it remains secure and tracked “in use”, inside and outside the firewall. The link between DLP discovery and IRM is particularly attractive, since if content were IRM-encrypted at source then all subsequent copies would automatically remain secure “at rest”, “in motion” and “in use”, even on unmanaged systems.
Both technologies are highly extensible and offer comprehensive APIs, making their integration straightforward. I am not aware of many real-world integrations to date, but I’m sure this will change.

Solving the data loss prevention (DLP) puzzle and using IRM for encryption

An interesting strategy guide was published recently by InfoWorld. Titled "Strategies for endpoint security", it addresses concerns and challenges businesses have regarding the protection of endpoints, namely laptops and desktop computers. One section of the guide that caught my eye was "Five technologies that will help solve the DLP puzzle." The article's premise is that "before embarking on a data loss prevention program, enterprises must first determine the essential technical ingredients."

The first subject tackled is that of classifying information in the first place. DLP's most valuable functionality is the ability to monitor many points in the enterprise and detect the storage or movement of documents, emails and websites that contain sensitive or classified data. However, one problem with DLP is configuring it to reflect a well designed and understood information classification policy. William Pfeifer states that "You cannot protect everything. Therefore methodology, technology, policy and training is involved in this stage to isolate the asset (or assets) that one is protecting and then making that asset the focus of the protection." Nick Selby, former research director for enterprise security at The 451 Group and CEO/co-founder of Cambridge Infosec Associates, then goes on to say the key is to develop a data classification system that has a fighting chance of working. To that end, lumping data into too few or too many buckets is a recipe for failure. "The magic number tends to be three or four buckets--public, internal use only, classified, and so on," he says.
So the recommendation is that DLP should be configured with a simple and easy to understand set of classifications. Keeping things simple in the complex world of security dramatically reduces the chance of human error and increases usability. Oracle IRM has had this message designed into its core from day one: it has a very powerful yet simple to configure and deploy classification system. This is what makes the union of IRM and DLP such a compelling story when it comes to a comprehensive data loss prevention solution that can actually be deployed and used at enterprise scale.

The second subject approached in the article is encryption. It's worth repeating the full statement here...
"This is a tricky one [encryption], as some security pros will tell you encryption does not equal DLP. And that's true to a point. As former Gartner analyst and Securosis founder Rich Mogull puts it, encryption is often sold as a DLP product, but it doesn't do the entire job by itself. Those polled don't disagree with that statement. But they do believe encryption is a necessary part of DLP. "The only thing [encryption doesn't cover] is taking screen shots and printing them out or smuggling them out on a thumb drive. Not sure I have a solution to that one."
No worries Rich, Oracle and Symantec have exactly the solution you are looking for. DLP detects that a document or email contains sensitive information and IRM encrypts and secures it. IRM not only encrypts the content, but it can limit the ability to take screenshots, stop printing, manage who can edit the content and who can see formulae in Excel spreadsheets, and even allow users to search across hard disks and content systems for information inside encrypted documents to which they have legitimate access.

The article continues: "Stiennon says that while all encryption vendors are not DLP vendors, applying encryption is a critical component to DLP. 'It could be as simple as enforcing a policy,' he says. 'When you see spreadsheets as attachments, encrypt them.'" Or more specifically, when you see any sensitive document or email, seal it with Oracle IRM! For more information on how IRM and DLP technologies can work together, have a read of this.

Tuesday, September 6, 2016

Qradar -- How is Offense Magnitude calculated?

Multiple properties affect the magnitude, including:

- number of events/flows associated with the offense
- number of log sources
- age of the offense over time
- weight of the network object associated with the offense
- severity/relevance/credibility of the events, and the categories of those events (login failures being weighted higher than firewall allows)
- vulnerability/threat assessment of the host(s) involved in the offense, from asset data (ports, vulnerabilities, applications, etc.)

The process for calculating the severity/credibility/relevance of the offense is somewhat complicated. It is not actually based on the sev/cred/relev of the events at all; it is based on the sev/cred/relev of the categories that are associated with those events.
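IBM does not publish the exact formula, but the shape of the calculation can be sketched as a weighted blend of these inputs. The weights, scaling and clamping below are assumptions for illustration, not QRadar's real values:

```python
# Hypothetical sketch only. General shape: average severity/credibility/
# relevance taken from the event *categories*, blended with assumed weights,
# scaled by context (event volume, network-object weight), clamped to the
# 1-10 magnitude scale QRadar displays.

def offense_magnitude(category_scores, event_count, network_weight):
    """category_scores: list of (severity, credibility, relevance) tuples,
    one per event category associated with the offense."""
    n = len(category_scores)
    sev = sum(s for s, _, _ in category_scores) / n
    cred = sum(c for _, c, _ in category_scores) / n
    rel = sum(r for _, _, r in category_scores) / n
    base = 0.5 * rel + 0.3 * cred + 0.2 * sev        # assumed weights
    volume_factor = min(1.0 + event_count / 1000.0, 2.0)  # assumed scaling
    raw = base * volume_factor * network_weight
    return max(1, min(10, round(raw)))

print(offense_magnitude([(7, 8, 9), (5, 6, 7)], event_count=400, network_weight=1.2))
```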

11 things you can do to protect against ransomware – Explained Visually by Europol


Creating Checkpoint VSX and Virtual System - Part 2 - VSX Deployment Example


This lab is an example of a typical VSX deployment scenario: one shared external interface to the Internet and separate internal interfaces for each VSX virtual firewall.

This lab is also the second part of an earlier post:
Creating Checkpoint VSX and Virtual System - Part 1

The previous post showed how to create a new VSX gateway through SmartConsole. This lab shows the steps to create two VSX virtual firewalls and to set up a virtual router. Two internal interfaces will be used to test the traffic from two different networks.

Topologies:


Basically, in this lab there is one physical VSX gateway with two logical VSX virtual firewalls. Each VSX virtual firewall has two interfaces, External and Internal.

Steps:


1. Follow the previous post "Creating Checkpoint VSX and Virtual System - Part 1" to add a new VSX gateway into Smart Dashboard.


The new VSX Gateway has four physical interfaces as shown in the following:
  • Eth0 Mgmt:192.168.2.41
  • Eth1 EXT: for 172.17.3.x External Network
  • Eth2 LAN1: for 192.168.99.x - VSX1 Internal Network
  • Eth3 LAN2: for 10.94.200.x - VSX2 Internal Network

After the new VSX Gateway (CP-VSX) is added into Smart Dashboard, the web UI in the browser will show:
Web UI is not supported in VSX mode. Please use Clish for OS configuration.


2. Adding vsx1 and vsx2 into CP-VSX
3. Check Network Topologies on both vsx1 and vsx2


Creating Checkpoint VSX and Virtual System - Part 1


VPN-1/FireWall-1 Virtual System Extension (VSX) is a security and VPN solution designed to meet the demands of large-scale environments. Centrally managed and incorporating key network resources internally, VSX allows businesses to offer comprehensive firewall and VPN functions to their customers, while reducing production costs and improving efficiency. Through a “virtualization” of network infrastructure, VSX allows administrators to replace a collection of standard hardware devices. The VSX Gateway comprises a virtual topology of virtual devices that replace physical ones, such as routers, traditional firewalls, and even some network cables.

The Checkpoint Configuring VSX document shows how to create a new VSX system and how to create a new virtual system, router and switch as well. This post records the procedure for how the lab was done in my virtual environment.

I am using VMware ESXi version 5.5.0 build 1623387 as the host for labs on Checkpoint-related products. In a previous lab, a standalone security gateway R77.20 with management was installed. This time I will add a new VSX gateway along with a couple of virtual systems, routers and switches.

Step 1: Create a new VSX gateway with the VSX Gateway Wizard:

For how to install the Checkpoint Gateway and management server, you should be able to find lots of videos on YouTube, such as the following two:


After you have installed the Checkpoint Gateway and management server on your VM system, you will need to log into Checkpoint Smart Dashboard to start the VSX Gateway Wizard.

Step 2: During the wizard, you have the option to add a virtual system, virtual router or virtual switch:


Step 3: VSX Gateway Properties:


Step 4: Now we can create a Virtual System, Virtual Router or Virtual Switch.