It seems as if every week there is a new top data security issue for healthcare organizations to remain vigilant about. If nothing else, this underlines why a well-rounded approach to data security is essential: covered entities must ensure their administrative, technical, and physical safeguards are all current.
A recent report from a law firm shows why employee training and education programs are critical for all industries, including healthcare. Human error was the number one cause of data security issues, according to Baker Hostetler. The firm reviewed cases it had worked on in the last year that related to privacy and data protection, and found that employee negligence was responsible for 37 percent of reported issues.
Interesting article; however, when it comes to human error, I believe the issue is really human nature. We can educate our employees not to click on phishing attempts, but some are written so well that even the most educated person will open the email. In these instances, the only remediation is to incorporate security products that detect and remediate any malware or threats on the corporate network, block any communication with command-and-control servers, and monitor your network for unusual behaviour.
Contact us with any questions on how we can assist.
Mark Burnette, For The Tennessean 11:11 p.m. CDT April 29, 2015
Today’s companies face a truly daunting task when trying to protect their computer systems and sensitive data from compromise. Attackers are better coordinated and more sophisticated than ever before, and their tools are easier to obtain and use.
While there are many security issues for businesses to be concerned about (some of which are covered in other installments of this series), an all-too-common problem at companies of all sizes is attacks directed at the computer users themselves. The vulnerable users are workers in the company who have user accounts and passwords and use desktops, laptops, tablets and other devices to interact with a company’s data and network. Hackers and other bad guys target these users because they have access to sensitive data and systems, their account passwords are typically easy to guess or crack, and they are often willing to open a malicious file, click on an emailed link or even willingly type their password into a bogus site.
Protecting your company against end-user attacks requires a two-pronged approach: 1) train your users to help them be more aware of how end-user security attacks occur and 2) configure your systems to make it harder for the bad guys to successfully get in if a user slips up. Here’s a list of steps you should take:
•Keep up to date with security patches provided by software vendors for end-user machines. In addition to operating system patches, be sure to patch application software such as Adobe, Java and web browsers, as older versions of those tools have well-known vulnerabilities that are frequent vectors of attack.
•Provide spam filtering for every machine, with sensitivity controls turned up. One of the most common tactics attackers use to make initial entry into a company’s network is enticing end users to click on a spam email link that installs malware. While this won’t stop every phishing attempt, if you can filter out even one, that is one fewer opportunity for an unsuspecting user to click a bad link.
•Remove local administrator rights from end-user machines. Local administrator rights give a user more power to make changes to a computer, and if an attacker gains control of a machine with those rights, damage to the network can be much more significant.
•Make sure there is up-to-date anti-virus/malware protection installed on every machine.
•Require IT personnel to use different passwords when they work on servers. Even IT administrators can fall victim to email phishing attacks when they are working on their own computer. If they click on a bad link while logged in as an administrator, attackers can gain big-time access to your network using their privileged credentials.
•Develop a security awareness program for all personnel to help them understand their responsibilities when using a company computer system and/or handling sensitive data. This training should also teach users how to create good passwords (ones that are easy to remember, but difficult to guess).
•And perhaps most importantly, require “two-factor authentication” for users logging on to the network from a remote location. That means that a password alone is not enough to gain access; another form of authentication is needed. That could take the form of such things as a fingerprint, a token (a physical device that generates a code that is entered on the machine) or a digital certificate. If two-factor authentication is in place, an attacker who successfully captures a user’s access credentials still won’t be able to remotely connect to the network without the second factor (the token).
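The hardware token described above typically generates a time-based one-time password (TOTP). As a minimal sketch of how such a code is computed under RFC 6238 (using only the Python standard library; real deployments should use a vetted authentication product, not hand-rolled code):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Both the server and the user's token derive the same counter from the clock,
    # so the code changes every `step` seconds.
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the shared secret never travels over the network at login time, a stolen password alone is useless to the attacker, which is exactly the property the article recommends.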
Taking all these measures will not completely eliminate the possibility of a successful attack, but it will greatly reduce your exposure to this common attack path, which just might make a potential attacker move on to a more vulnerable target.
Mark Burnette is a partner in the Security and Risk Services practice at LBMC, the largest regional accounting and financial services family of companies based in Tennessee, with offices in Brentwood, Chattanooga and Knoxville.
CVE-2010-2568, a popular Microsoft Windows vulnerability that was used as one of the infection vectors for Stuxnet, accounted for 33 percent of identified exploit samples in 2014, Jewel Timpe, senior manager of threat research at HP Security Research, told SCMagazine.com on Monday.
The report shows that CVE-2010-0188, a vulnerability in Adobe Reader and Acrobat, accounted for 11 percent of exploit samples in 2014. Six Oracle Java bugs identified in 2012 and 2013 also made the top ten list, as well as two Microsoft Office flaws – one identified in 2009 and the other in 2012.
“Our biggest message here is that we have got to start learning from our past,” Timpe said, going on to add, “We know software has vulnerabilities and vendors patch them, and when those patches are made available, they need to be applied. The best patch in the world won’t help your software if you don’t apply it.”
Timpe admitted that patching everything is not easy.
Patch management is a challenge for organizations because it is expensive and resource intensive, she said, adding that launching new applications may negatively affect existing infrastructure and could even result in regression in other software – meaning previously patched vulnerabilities are possibly reintroduced.
Timpe suggested taking the stance of the “assumed breach,” and explained that organizations – big or small – should implement technologies that identify breaches quickly and shut incidents down. She added that companies should identify which assets are most valuable and assess how to protect them.
Another significant issue noted in the report is server misconfigurations.
“This year we saw the bulk of them are really misconfigurations that are allowing unnecessary access to files and directories that they should not be allowing access to,” Timpe said, going on to add, “These configurations are giving adversaries a new way to get in.”
According to the report, penetration testing coupled with internal and external analyses of configurations can help in identifying issues.
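A small piece of such an internal configuration review can be automated. For example, a sketch (assuming a POSIX filesystem; the permission policy here is a deliberately simple stand-in for a real baseline) that flags the kind of unnecessary file and directory access the report describes:

```python
import os
import stat

def overly_permissive(root):
    """Yield (path, mode) for entries under `root` that any local user can
    read or write. World-accessible files and directories are a common source
    of the unintended exposure described in the report."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & (stat.S_IROTH | stat.S_IWOTH):  # "other" read or write bit set
                yield path, oct(mode & 0o777)
```

A finding from this scan is not automatically a vulnerability, which is why the report pairs automated analysis with penetration testing to confirm what is actually reachable.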
In 2015, Timpe said she expected to see more open source vulnerabilities, more SCADA attacks, and more of a focus on infrastructure. Additionally, she said that attackers will continue to have success by exploiting older bugs.
Timpe – who urged organizations to update if they are running older systems that have reached or are nearing end of support – said that cooperation and working together will help reduce the threat posed by attackers.
“If we talk more, share more, and gain a thorough understanding of imminent threats, it will continue to increase the cost the attacker has to spend to be successful,” Timpe said.
SolarWinds has added new features to its Network Performance Monitor (NPM) tool to help IT administrators better manage the increasing number of mobile devices connecting to the corporate network.
The ramp-up in mobile adoption and the bring-your-own-device trend has added considerable complexity to the enterprise network. However, many IT departments are still monitoring networks the way they did about a decade ago. It is not uncommon to see an IT outfit employing a collection of different solutions to keep track of different devices and conducting various monitoring tasks manually.
SolarWinds (NYSE: SWI) said its updated NPM now has wireless heat mapping that allows IT pros to maintain automatic, real-time maps of wireless network signal strengths. The tool also enables continuous wireless coverage and speeds up troubleshooting.
A new forecasting feature also automatically monitors critical network resources to help administrators predict future needs and prevent outages.
“With NPM’s new wireless network heat maps, IT pros can automatically map their wireless networks to show signal strength according to their floor plans – whether in a small doctor’s office or a 40,000-square-foot campus – with a visual display of critical status and performance metrics,” said Chris LaPoint, vice president of product management at SolarWinds.
With the heat maps, IT departments can now:
- Troubleshoot client connectivity issues, keeping mobile end-users working with minimal disruption to their productivity
- Generate user-sourced wireless signal strength surveys for coverage in all network locations, including remote sites
- Prioritize wireless signal strength where it is most needed and proactively make adjustments, such as adding wireless access points, modifying the environment, etc.
- Use client location tracking to find any wireless-connected device within the network, helping IT keep track of end-users and rogue or misplaced devices
The new capacity forecasting capability automates planning for bandwidth, wide area network circuits, and other network needs. IT departments can now:
- Use historical data from NPM on CPU, memory, volumes, connected wireless clients, and node and interface traffic utilization to provide automated assessments of average and peak use
- Answer the question, “How many days before I run out of disk space, CPU, bandwidth, etc., and it impacts a user’s network connectivity?”
- Set customizable alerts to proactively secure the necessary network resources to get ahead of those situations
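The “days until I run out” question above reduces to trend extrapolation. As a minimal sketch (assuming evenly spaced daily utilization samples; this is a generic least-squares projection, not SolarWinds’ actual forecasting algorithm):

```python
def days_until_exhausted(samples, capacity):
    """Fit a least-squares trend line to daily utilization samples (oldest
    first) and project how many days remain until `capacity` is reached.
    Returns None when usage is flat or shrinking."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # no upward trend, so no projected exhaustion date
    return (capacity - samples[-1]) / slope

# e.g. a disk that grew from 80 GB to 90 GB used over 11 days (~1 GB/day)
# is about 10 days from a 100 GB limit.
```

Alert thresholds like those in the last bullet then become a comparison against this projection, e.g. warn when fewer than 30 days remain.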