Sunday, November 20, 2016
With PowerAI, IBM Will Likely Accelerate Enterprise AI
IBM Watson is now an entire division of the company, which indicates the importance IBM places on the future of AI.
Watson is only one part of IBM's AI investment; I consider it the "easy button" for enterprises that don't want to create everything from scratch. IBM also has DIY (do-it-yourself) infrastructure for cloud providers through POWER8, OpenPOWER, and OpenCAPI, designed for cloud giants that roll their own AI software. But what about enterprises in the middle, those who want solid infrastructure and want to invest in the latest deep neural network frameworks? IBM announced an answer this week at SC16, and it's called PowerAI.
IBM POWER8 and OpenPOWER for DIY public cloud AI
IBM intentionally designed the POWER8 architecture for future workloads like artificial intelligence, machine learning, and deep neural networks. In 2013, to expand the collaboration of hardware companies in the POWER domain, IBM established the OpenPOWER Foundation, which allows companies to closely collaborate on hardware and software to accelerate future workloads like AI and ML (machine learning). Part of the way IBM has done that is with the CAPI (Coherent Accelerator Processor Interface) and OpenCAPI interconnects for GPUs and other forms of accelerators, which dramatically improve specific workload performance. IBM's POWER isn't just for AI and ML, as POWER does well in HPC and accelerated databases like Kinetica, but those AI workloads run really well on the POWER architecture.
IBM Watson, the enterprise AI “easy button”
As I said, for enterprises, IBM's Watson has gotten traction in certain verticals like healthcare, and the "turnkey" solution makes it easier to deploy new 'smart' capabilities. However, there are still enterprises that want to develop their own software and might not want to use exactly the software IBM offers with Watson. They may want to develop with specific deep learning frameworks like Caffe, Torch, TensorFlow, or Theano. These are big credit card companies and drug research companies with internal research arms and the budgets to roll their own sophisticated neural-network-based software.
PowerAI for enterprise DIY AI
For those companies, IBM has a new offering called PowerAI. PowerAI is a software toolkit with deep learning frameworks and building-block software designed to run on IBM's highest-performing server in its OpenPOWER LC line, the IBM Power S822LC for High Performance Computing, which features NVIDIA NVLink technology optimized for POWER and NVIDIA's latest GPU technology, operating at 80 GB per second.
Monday, November 14, 2016
Successfully implement DLP
Define focus area
The first and most important step is to define your focus area. Depending on the size of the organization, a DLP implementation can easily take six months to a year. You can't start deploying DLP in the whole organization at once. The approach should be to start with a limited scope and then gradually expand to the whole organization.
Examples of focus areas include monitoring and protecting email communication between the company's executives and board members, or a certain department such as HR or Finance. After selecting your focus area, hold meetings with the team members in that area and gather information such as what types of important information (from their point of view) they hold, where it resides, and what keywords are generally used in their documents. Keywords are important, as they will be used at a later stage to tag the documents.
Perform risk assessment
A risk assessment at this point will provide a deeper understanding of the risks, costs, and potential sources of data loss. The more you know about the sources of data loss, the more accurately you will be able to translate this information into rules.
Classify your data
If you haven't already defined data classifications for your organization, the following classification scheme can be used as-is or with some modifications to suit your environment.
- Protected: Information is not shared with anyone.
- Restricted: Information is shared only with selected members of the organization.
- Confidential: Information is shared between selected departments or partners.
- Internal: Information can be shared within the whole organization and with partners.
- Public: Information can be shared with anyone.
Once you have defined a classification scheme for your organization, the next step is to identify information flows. You need to know the source and destination of the information you want to protect when it is shared within the organization or with external partners or customers. At this point you are essentially identifying the normal flow of the information. For example, payroll information should only be shared among selected members of the payroll department; it should not be shared with anyone else inside or outside the organization. Another example could be that financial audit reports should only be shared between members of the finance department, the audit department, and the external auditor.
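To make this concrete, here is a minimal sketch, with invented data types and group names, of how the classification scheme and the normal flows identified above could be captured as data that DLP rules can later consume:

```python
# Hypothetical sketch: capture the classification scheme and the
# "normal" information flows as data so they can later drive DLP rules.
from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3
    PROTECTED = 4

# Normal flows: data type -> groups permitted to receive it.
ALLOWED_FLOWS = {
    "payroll": {"payroll-team"},
    "financial-audit": {"finance-dept", "audit-dept", "external-auditor"},
}

def flow_permitted(data_type: str, recipient_group: str) -> bool:
    """True if sharing this data type with the group is a normal flow."""
    return recipient_group in ALLOWED_FLOWS.get(data_type, set())

print(flow_permitted("payroll", "payroll-team"))  # True
print(flow_permitted("payroll", "marketing"))     # False
```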
Tag/Label documents
After identifying the information flows, all documents should be labeled according to the classification scheme. This is the core on which your DLP rules operate. There are two methods you can adopt in your organization, in combination or standalone. The first is to manually add tags to your documents. For example, you can write the protected, confidential, or internal keyword in the document header, or for Microsoft documents you can add the tag value in the document properties. This approach is very accurate; the difficult part is that you have to enforce the practice in your organization, educate your users, and make sure everyone follows it by conducting audits.
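As a minimal sketch of the document-properties approach, assuming the python-docx package is available and using a hypothetical file name:

```python
# Minimal sketch, assuming python-docx is installed
# (pip install python-docx): write a classification tag into the
# built-in "keywords" core property of a .docx file.
from docx import Document

def tag_document(path: str, label: str) -> None:
    doc = Document(path)
    doc.core_properties.keywords = label  # e.g. "confidential"
    doc.save(path)

tag_document("q3-payroll-report.docx", "confidential")  # hypothetical file
```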
The second approach is based on keywords. In step one of the implementation plan, I mentioned that after defining your focus area you should hold meetings with the team members and collect the keywords normally used in their documents. You can then use these keywords to discover documents on user machines and tag them accordingly. Most host-based DLP products have a discovery feature that you can use to discover and tag documents.
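A simplified illustration of what such a discovery scan does under the hood (the keywords and path are examples only):

```python
# Illustrative keyword-based discovery: walk a directory tree, look for
# the keywords gathered from each team, and report files to be tagged.
import os

KEYWORDS = {"salary", "appraisal", "payroll"}  # example keywords from HR

def discover(root: str) -> None:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    text = fh.read().lower()
            except OSError:
                continue  # unreadable file; a real DLP agent would log this
            hits = {kw for kw in KEYWORDS if kw in text}
            if hits:
                print(f"{path}: tag as confidential (matched {sorted(hits)})")

discover("/srv/shared-docs")  # hypothetical file share
```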
Define Policies
Write all protection policies down clearly. This might be a legal or compliance requirement, but more importantly, clearly written policies will help your IT team translate them into DLP rules easily and effectively.
Implement policies in DLP
In this step, translate all protection policies into DLP rules. At first your rules should not block any communication; they should be configured only to monitor events. Based on these events you can fine-tune your rules, and once you are sure they will not interrupt normal information flows, you can set your rules to block any activity that should not be allowed.
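A toy sketch of that monitor-first rollout (the SSN pattern and the mode switch are illustrative, not any particular product's syntax):

```python
# Toy DLP rule: run in MONITOR mode first (log only), then flip to
# BLOCK once the alerts confirm normal flows won't be interrupted.
import logging
import re

logging.basicConfig(level=logging.INFO)
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # example: US SSN pattern

MODE = "MONITOR"  # change to "BLOCK" after the tuning phase

def allow_message(message: str) -> bool:
    """Return True if the message may be sent."""
    if SSN_RE.search(message):
        logging.info("DLP rule matched: SSN pattern found")
        return MODE != "BLOCK"  # in MONITOR mode we only log
    return True

print(allow_message("My SSN is 078-05-1120"))  # logs the event, returns True
```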
Thursday, November 10, 2016
Data Leakage Protection Thoughts
I tried to capture my concerns in the following two figures.
I usually approach security issues from the point of view of a security analyst, meaning someone who has operational responsibilities. I don't just deploy security infrastructure. I don't just keep the security infrastructure functioning. I am usually the person who has to do something with the output of the security infrastructure.
In this respect I can see the world in two states: 1) block/filter/deny or 2) inspect and log.
As a security analyst, B/F/D is generally fairly simple. Once a blocking decision is made, I don't care that much. Sure, you might want to know why someone tried to do something that ended up resulting in a B/F/D condition, but since the target is unaffected I don't really care.
Consider this diagram.
As a security analyst, inspect and log is much more complicated. Nothing is blocked, but I am told that a suspicious or malicious activity was permitted. Now I really need to know that someone successfully completed an act that resulted in a permitted yet inspected and logged condition, because the target could be negatively affected.
Consider this diagram.
Some might naively assume that the solution to this problem is just to forget inspection and logging and just block/filter/deny everything. Good luck trying that in an operational setting! How often do we hear about so-called "IPS" running in passive mode? How many fancy "DLP" products are running now in alert-only mode?
At the risk of discussing too many topics at once, let me also contribute this: is it just me, or are we security people continuously giving ground to the adversary? In other words:
- Let's stop them at our firewall.
- Well, we have to let some traffic through. Our IPS will catch the bad guy.
- Shoot, that didn't work. Ok, when the bad guy tries to steal our data the DLP system will stop him.
- Darn, DLP is for "stopping stupid." At least when the bad guy gets the data back to his system our Digital Rights Management (DRM) will keep him from reading it. (Right.)
I guess my thoughts on DLP can be distilled to the following.
- DLP is "workable" (albeit of dubious value nevertheless) if you run it solely in a B/F/D mode.
- As soon as you put DLP in inspect and log mode, you need to hire an army of analysts to make sense of the output.
- The amount of asset understanding required to run DLP in either mode is likely to be incredibly large, unless you so narrowly scope it as to make me question why you bought a new product to enforce such a policy.
- DLP is not going to stop anyone who is not stupid.
Is anyone else hearing demand for DLP, and what are you saying?
10 comments:
Dr Anton Chuvakin said...
"DLP is not going to stop anyone who
is not stupid"
Maybe. So, let's define 'stupid' a bit.
1. Negligent employee?
2. Script kiddie?
3. A no-so-skilled attacker who is still beyond a SK?
Or?
Gunnar said...
Was about to comment, then I had a brief spell of ROFL when I read the "Let's stop them at our firewall." comment.
To answer questions on hearing demand for "DLP", sure but I just chalk it up to InfoSec's endless Silver Bullet quest, where they reach for the stars and deliver next to nothing.
Here is a crazy idea - what if we put authentication, authorization, and auditing into our systems?
Kevin Rowney said...
Yes, DLP does plenty of "stopping stupid" and in many large-scale deployments does so in what you call B/F/D mode.
DLP has also busted numerous id-theft rings, corrupt employees, and hackers.
The threat surface is actually quite complex and not so simple as "stupid-employee" vs. "evil genius hacker".
@Gunnar: AAA is baseline protection, but field results from DLP deployments clearly indicate this is a low hurdle to clear. More detail on that here
Alex Raitz said...
When I began analyzing network traffic via NSM, management was extremely worried about the data leakage being detected, especially data leaked from inside to outside the corporate network via IM, FTP, etc.
When management directed me to investigate data leakage protection/prevention (B/F/D), it quickly became apparent that whatever risk mitigation these systems provided would be trumped by the business impact of administering the system. The anticipated increase in first-level help desk calls alone was nightmarish.
On the other hand, I did not find the "inspect and log" modes to be terribly verbose or challenging to handle. I agree that there was a fair amount of asset understanding involved in making data leakage detection systems useful, to the extent they are a poor candidate for outsourcing/MSSP, but I cannot imagine that an organization which can manage an NSM infrastructure would not be able to manage a DLP infrastructure.
In the end, I didn't find much that the DLP systems had to offer in "inspect and log" mode compared to SGUIL. Vigilance, some clever Snort rule writing, and user training on data leakage will go a long way to managing the "stupid" leakage.
Anonymous said...
DLP is part of the problem, not part of the solution.
It is an impossible exercise - if you have the capability to read data, you have the capability to copy it, some way or another. It is folly to pretend otherwise.
The best one can hope for with DLP is that it will help you tell a more convincing lie to an auditor, regulator or judge about "due diligence".
As a product category it's guaranteed to be an expensive, distracting failure. That money and productivity are flushed away on products that can fundamentally deliver nothing but a CYA story (the MORE it costs, the more it shows you care!) -- that the entire category of DLP has even the tiniest shred of credibility and legitimacy -- is a perfect example of the problems of fossilized corporate infosec.
Let us just pray that these jokers don't manage to ensconce themselves as a required best practice in government, PCI, or audit requirements.
b said...
LonerVamp said...
I think you and the other commenters above hit this one properly.
To me, "DLP" is nothing more than the continued marketing spew of antivirus->antispyware->antimalware->Hips->endpoint security->DLP... Basically the same product, bloated.
It does fine on the surface, but anything below that is weak or hard to analyze. Anyone with a proper security environment anyway won't need DLP. It's just the same old HIPS product with some endpoint security pieces tacked on, most of which should be done anyway by system management tools or LDAP-pushed policies.
To me, "DLP" is nothing more than the continued marketing spew of antivirus->antispyware->antimalware->Hips->endpoint security->DLP... Basically the same product, bloated.
It does fine on the surface, but anything below that is weak or hard to analyze. Anyone with a proper security environment anyway won't need DLP. It's just the same old HIPS product with some endpoint security pieces tacked on, most of which should be done anyway by system management tools or LDAP-pushed policies.
Tigercat 6795 said...
I agree with several commenters and disagree with others, so here it goes:
I am very much a supporter of DLP products as a level of protection for general users. I also agree that DLP isn't the answer to all your problems.
I see DLP tools as a technology solution to help drive awareness and behavior patterns. More a "front end" tool. Policies used in these tools can help reduce significant gap areas that network and log monitoring aren't going to cover all the way. Examples: CD burning, USB writing, Emailing files, and uploading classified data to websites. To me this can also be leveraged as a Security Awareness tool that helps when you can't do presentations 24 hours a day.
This being said I have reservations on network based DLP products and lean more toward the client side. I do not see this as a replacement for personal firewalls, antivirus, anti-malware tools. I would have serious doubts over any "single-client" product that would profess to cover all those areas.
My experience with client-side DLP tools is that they are very intrusive (+ and - on that), but once you get past the configuration hurdles (standard in a number of products), they have proven on several occasions to be very enlightening to those using them, in terms of what was "thought" to be happening versus what was "really" happening. Very helpful when explaining to a non-technical crowd, as they see data flying out the door.
I think NSM has the potential to be a good complement to DLP, but will reserve commentary on it as a replacement until I see the end results of an ongoing implementation.
Fortunately I will have the opportunity to see both sides.
PS - I second Kevin Rowney's comment: "The threat surface is actually quite complex and not so simple as 'stupid-employee' vs. 'evil genius hacker'."
Dominic White said...
My response ended up too long, so I moved it to my blog; you can find it at http://singe.za.net/blog/archives/972-A-Response-to-Bejtlich-on-DLP.html
Joe Allesi said...
Yep, Data Leakage (Loss) Prevention is really just another method of enforcing policy. And you are correct, it's just a nice marketing term for extrusion detection. However, I think DLP is best managed by the owner of an edge service and/or by the data owners themselves. For example, suppose the policy is that no SSNs will leave the company via SMTP or HTTP unencrypted. The email gateway is then configured to force encryption on the message before handing it to the remote domain, and an HTTP POST of the SSN is rejected by the proxy. There are lots of other options, but I really think this level of data detail would be too much for an NSM team to support.
Data protection strategy
So now that you're ready to look more closely at encryption in your organization, where should you begin?
Every organization is different, so there is no one-size-fits-all data protection strategy. Before you can put your strategy into an actionable plan, you need to answer the following four questions.
1. How does data flow into and out of your organization?
Do you receive emails with file attachments, or send them out? Do you receive data on USB sticks or other removable media? How does your organization store and share large amounts of data internally and externally? Do you use cloud-based storage services like Dropbox, Box, OneDrive, etc.?
What about mobile devices and tablets? According to a Sophos survey, the average technology user carries three devices. How do you rein in the wide range of devices that have access to enterprise data?
You should look for an encryption solution that is built to adapt to the way you use data and how data flows within your organization.
Use case example: With more and more businesses using cloud storage, you need a solution that secures cloud-based data sharing and provides you with custody of your encryption keys.
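As a rough sketch of that idea, assuming the third-party cryptography package (the file names are hypothetical): data is encrypted locally before it ever reaches the sync folder, so the key never leaves your custody.

```python
# Rough sketch, assuming the "cryptography" package is installed
# (pip install cryptography): encrypt a file before it enters the
# cloud sync folder, keeping the key in your own custody.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this in your own key vault, not the cloud
fernet = Fernet(key)

with open("board-minutes.pdf", "rb") as fh:          # hypothetical file
    ciphertext = fernet.encrypt(fh.read())

with open("cloud-sync/board-minutes.pdf.enc", "wb") as fh:
    fh.write(ciphertext)
```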
2. How do your organization and your people make use of data?
What are your employees' workflows, and how do they go about making their day-to-day jobs more productive? What tools, devices, or apps do they use, and do any of those present a possible vector for data loss?
You need to understand how employees use third-party apps, and decide whether you should prohibit what is often called "shadow IT," whether you can trust the security of those systems, or whether you should bring development of these tools in house.
3. Who has access to your data?
This topic can be both an ethical and regulatory discussion. In some situations, users should not ethically have access to certain data (e.g., HR and payroll data).
Worldwide, there are data protection laws that stipulate that only those who need data to perform their tasks should have access to it; everyone else should be denied. Do your employees have access to just the data they need to do their jobs, or do they have access to data they do not need?
Use case example: IT administrators tend to have unlimited access to data and IT infrastructure. Does the IT administrator need access to everyone's HR data, or access to the legal department's documents about the latest court case? In a public company, should people outside of the finance department have access to the latest financial figures?
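One hedged way to act on this question is a periodic access review; the sketch below (roles, users, and data sets are all invented for illustration) flags access that exceeds what a role needs:

```python
# Hypothetical access review: flag users whose actual access exceeds
# what their role needs. All names and mappings are invented.
ROLE_NEEDS = {
    "hr-analyst": {"hr-data"},
    "it-admin": {"infrastructure"},
}

ACTUAL_ACCESS = {
    ("alice", "hr-analyst"): {"hr-data"},
    ("bob", "it-admin"): {"infrastructure", "hr-data", "legal-docs"},
}

for (user, role), access in ACTUAL_ACCESS.items():
    excess = access - ROLE_NEEDS[role]
    if excess:
        print(f"{user} ({role}) has unneeded access: {sorted(excess)}")
# -> bob (it-admin) has unneeded access: ['hr-data', 'legal-docs']
```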
4. Where is your data?
Centralized and mostly contained in a data center? Completely hosted in the cloud? Sitting on employee laptops and mobile devices?
According to a Tech Pro Research survey, 74% of organizations are either allowing or planning to allow their employees to bring their own devices to the office for business use (BYOD). Employees are carrying sensitive corporate data on their devices when they work from home and on the road, increasing the risk of data leaks or compliance breaches. Think how easy it would be to access confidential information about your organization if an employee's smartphone gets stolen or misplaced.
Challenges and solutions
According to the 2015 Global Encryption & Key Management Trends Study by the Ponemon Institute, IT managers identify the following as the biggest challenges to planning and executing a data encryption strategy:
- 56% – discovering where sensitive data resides in the organization
- 34% – classifying which data to encrypt
- 15% – training users on how to use encryption
Unfortunately, there is no one-size-fits-all solution to these challenges. Your data protection plan must be based on your business: the type of data your business works with and generates, local/industry regulations, and the size of your business.
Employees need to understand how to comply with a clearly defined data protection plan and how to use encryption. They must be clearly told which data they have access to, how this data needs to be accessed, and how they can protect this data.
Most importantly, you need to ensure that you can both offer and manage encryption in such a way that it doesn't impact the organization's workflows.
To learn how Sophos SafeGuard Encryption helps you address these challenges, check out our blog post about things to consider when choosing the right encryption solution. And download our free whitepaper, Deciphering the Code: A Simple Guide to Encryption.
Data Loss Prevention Strategies for Networks
Why Is It Necessary?
Besides the threat of cyber-assault, there’s the disturbing fact that most corporate data losses may be attributed to users within the corporate network – either through ignorance, lax security practices, human error, actual malicious intent, or other activities such as industrial espionage.
For organizations with a large user or customer base, it’s likely that much of the data being handled is of a sensitive personal or financial nature, and the loss or disclosure of personally identifiable information (PII), personal health information (PHI), proprietary data and intellectual property may have serious consequences – both for their finances and their reputation.
On top of this, there are often stringent legal statutes and regulatory compliance regimes governing data on corporate networks – the likes of Sarbanes-Oxley, the Payment Card Industry Data Security Standard (PCI DSS), HITECH or HIPAA – which lay down strict guidelines and penalties, in the event of its leakage or loss.
Network Data Loss Prevention Considerations
A corporate network is a complex system, with many potential points of data leakage. So a DLP strategy needs to take several factors into account. Circumstances will vary from organization to organization and within the operational and compliance regimes of different industries. But typical considerations include the following:
- Monitoring internal email, Web-based email, network traffic, applications, and social media – with restrictions and controls as needed.
- Inspecting and if necessary blocking traffic on email (internal and Web), HTTP and HTTPS transactions, FTP transfers, TCP/IP, and Web 2.0 applications.
- Inspecting the subject lines, body text, and attachments of outgoing messages, for potentially sensitive material.
- Setting and implementing policies for the monitoring and possible blocking of blogs, wikis, and other Web 2.0 applications.
- Setting risk levels for outgoing email and messages, with associated actions (e.g. block, alert, encrypt)
- Encrypting email messages for added security and in compliance with certain regulatory requirements.
- Notifying end users and network administrators when issues of non-compliance or violation of corporate policy arise.
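To make the inspection items above concrete, here is a toy outbound-message inspector (the patterns and actions are illustrative only; real products use far richer detection than a few regexes):

```python
# Illustrative only: inspect a message's subject and body against a few
# content rules and return the strictest matching action
# (block > encrypt > alert > allow).
import re

RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "SSN", "block"),
    (re.compile(r"\b(?:\d[ -]?){15}\d\b"), "card number", "encrypt"),
    (re.compile(r"\bconfidential\b", re.I), "confidential tag", "alert"),
]

SEVERITY = {"block": 3, "encrypt": 2, "alert": 1, "allow": 0}

def inspect(subject: str, body: str) -> str:
    """Return the strictest action triggered by the message content."""
    action = "allow"
    for pattern, label, act in RULES:
        if pattern.search(subject) or pattern.search(body):
            print(f"matched {label} -> {act}")
            if SEVERITY[act] > SEVERITY[action]:
                action = act
    return action

print(inspect("Q3 numbers", "Card: 4111 1111 1111 1111"))  # -> encrypt
```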
Low-Level Monitoring
For a network, low-level monitoring provides a baseline defense level. It aims to ensure that all data leaving the network is inspected, and includes:
- Endpoint monitoring, by scanning desktop systems and laptops to keep an inventory of potentially sensitive data stored on them – with measures to secure or relocate this information.
- Monitoring and blocking any sensitive data transferred, transmitted, copied, or printed from laptops and desktop systems.
- Scanning the network as a whole, for sensitive information exposed on endpoint devices, file servers, websites, collaboration platforms, etc.
- Extension of scanning, inspection and protection to mobile devices on the network, via operating system compatible apps and APIs.
Data Loss Prevention Software
Data loss prevention software products (also known as data leak prevention, extrusion prevention, or information loss prevention products) are available from leading manufacturers as pre-packaged solutions, or as custom-made or customizable suites. All are essentially based on a monitoring and remediation policy, which consists of an exhaustive set of rules for how data moving within and out of a network should be treated.
Some (usually legacy) packages rely on the user to draw up a DLP policy for their own network – a process that may take weeks if done manually. An increasing number of more recent products come with ready-made template policies, which users may modify to suit their particular circumstances. In other cases, specialist modules may be purchased and plugged into the system.
For low-level or endpoint analysis and integration with BYOD (Bring Your Own Device) policies, compatibility with the major mobile operating systems is an essential feature. So too is support for the major email and Internet file transmission protocols.
Monitoring of social media platforms and e-commerce resources such as Salesforce is a feature of some leading systems, as are tools for monitoring hosted Web platforms such as Microsoft Online Services and Google Apps.
Existing Market Trends
Early adoption of DLP software solutions concentrated on endpoint analysis and network traffic monitoring via laptops, desktop systems, email, mobile devices, USB (flash) drives, and removable storage media. Heavyweights like Symantec and Intel-McAfee still dominate the global market, which stood at $670 million as of 2013, according to figures from Gartner, Inc.
DLP for The Cloud
For the future, the cloud seems set to be the major battleground for data loss prevention solutions and services. A Cloud Adoption & Risk Report from DLP provider Skyhigh suggests that the average organization now engages 759 cloud services, with this number increasing by more than 20% each quarter.
With companies running the risk of having their employees upload sensitive personal and corporate data to a still largely unregulated ecosystem of cloud servers, there’s a need for organizations to extend their existing Data Loss Prevention policies to the cloud – and a corresponding pressure on providers of DLP software solutions to integrate cloud protection into their tools.
Endpoint Security with DLP
Three different levels of DLP solution
Data in Motion
Data that uses HTTP, FTP, IM, P2P, and SMTP protocols is mirrored to the DLP server for inspection, where visibility is enhanced.
Data at Rest
Data in file servers, databases, host computers set up for file sharing, etc.
Data at End Points
Data that sits on end-user hosts (workstations and notebooks).
Technical Feature Considerations
- Deep content analysis, monitoring, and prevention: identification and blocking capability
- Centralized management: central policy setting, dashboard features
- Broad content management across platforms and ease of integration: review of the information infrastructure, including software, for requirement and compatibility issues
- Automated remediation: transfer of confidential files, LDAP lookup, secure purging of sensitive data
Business Environment Considerations
- Matching with business need: matches a defined business need over feature allure
- Market presence: major presence in the market, financial industry experience
- Staffing needs: staffing considerations to handle additional responsibilities
Nowadays DLP solutions include a couple of interesting technologies that provide endpoint data leakage prevention and support emerging endpoint trends such as BYOD (Bring Your Own Device).
The setup below is an example of how you can configure endpoint security for the mobile devices on your company network. To function properly, however, your existing proxy server must support HTTPS decryption; proxies such as Squid and ISA don't support HTTPS decryption.