Top Ten Cyber Security Best Practices

Introduction

As governments across the world set out to develop comprehensive data standards, navigating complex compliance frameworks becomes increasingly cumbersome. For organizations in regulated industries, correctly handling sensitive data is crucial. While the goal of compliance may sound simple, the process by which an organization achieves it is often misunderstood and poorly executed.

As companies look to the cloud for convenience, affordability, and scalability, bad actors have developed new vectors for attacking cloud infrastructure in a bid to steal valuable user and customer data. As organizations grow in size, the challenges of scaling securely grow too. By following some best practices, you can protect your valuable data and intellectual property while maintaining operational efficiency. In this article, we detail the top ten cybersecurity best practices.

Security-Influenced Development

Integrate security into every aspect of your development cycle. Everyone involved should be trained in the security best practices of their discipline and should carry out their work accordingly. More specifically, think about how to incorporate security checks into user stories, acceptance criteria, development patterns, unit and integration tests, code reviews, CI/CD pipelines, and user acceptance tests. Following secure programming practices, such as correct validation of user input, can on its own shut down a large number of common attack vectors (SQL injection, cross-site scripting, and buffer overflow attacks, to name a few).
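As a minimal sketch of the input-validation point, the example below uses Python's built-in sqlite3 module to show how a parameterized query neutralizes a SQL injection attempt (the table and user names are illustrative):

```python
import sqlite3

def find_user(conn, username):
    # Parameterized query: the driver treats the value strictly as data,
    # so input like "'; DROP TABLE users; --" can never execute as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "alice"))                    # legitimate lookup
print(find_user(conn, "'; DROP TABLE users; --"))  # injection attempt matches nothing
```

The same pattern (placeholders instead of string concatenation) applies in any language or database driver.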

Full-service system and process evaluation, along with general structure assessment, can aid in identifying, mitigating, and ultimately preventing security vulnerabilities. Lastly, building continuous learning into the process is an often overlooked component of security best practices.

Access Control and Password Policies

The root account should not be used for day-to-day access to your cloud environments, and steps should be taken to secure all other access. Where possible, issue temporary credentials: services such as the AWS Security Token Service (STS) allow scoped credentials to be configured to last anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them.
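The expiry behaviour described above can be sketched as a simple lifetime check (timestamps and the one-hour duration here are illustrative, not tied to any real STS session):

```python
from datetime import datetime, timedelta, timezone

def credentials_valid(issued_at, duration_seconds, now=None):
    """Return True while a set of temporary credentials is still inside
    its configured lifetime; after that, every request should be refused."""
    now = now or datetime.now(timezone.utc)
    return now < issued_at + timedelta(seconds=duration_seconds)

issued = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(credentials_valid(issued, 3600, now=issued + timedelta(minutes=30)))  # True
print(credentials_valid(issued, 3600, now=issued + timedelta(hours=2)))     # False
```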

If you are using an application or service where short-term credentials cannot be configured, require that users provide a strong password by enforcing a certain level of complexity. Passwords should be configured to expire after a set amount of time, and multi-factor authentication (MFA) should be implemented. Where a password is something a user knows, a second factor should require something a user is (such as a biometric) or something a user has (such as a one-time token); even if the password is compromised, the second factor should block unauthorized access. Requiring MFA, enforcing a password policy, and adding periodic access reviews are therefore all important components to consider.
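A complexity check like the one described can be sketched in a few lines; the specific thresholds (minimum length, required character classes) are illustrative and should follow your own policy:

```python
import re

def meets_policy(password, min_length=12):
    """Check a candidate password against a simple complexity policy:
    minimum length plus upper-case, lower-case, digit, and symbol classes."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return all(checks)

print(meets_policy("correct-Horse-7-battery"))  # True
print(meets_policy("password"))                 # False
```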

Principle of Least Privilege

Any users or roles defined on your platform should have only the minimum access required to complete their task, and should hold that access only for as long as the task takes. Automate privilege checks to alert your security team to any suspicious or unused roles, and schedule regular audits of these systems to ensure the logic and scope of your automated processes are correct. Designing your systems using infrastructure as code means all of your resources have well-defined, consistent privileges across environments.
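To make the idea concrete, here is a sketch that builds an IAM-style policy document scoped to read-only access on a single bucket (the bucket name and action list are illustrative):

```python
import json

def read_only_policy(bucket):
    """Build a minimal IAM-style policy granting read-only access to one
    named S3 bucket -- nothing broader, no write or delete actions."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }

print(json.dumps(read_only_policy("example-reports"), indent=2))
```

Starting from the narrowest policy that works, and widening it only when a task demands it, is the practical inverse of granting broad access and trimming later.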

Infrastructure As Code

Manually building and maintaining a well-designed infrastructure is a complex, expensive, and time-consuming task. From a security standpoint, one of the biggest concerns with the manual process is consistency. Humans are fallible: as the number of development and production environments grows, so does the probability of misconfiguring one of them. Defining your entire architecture as configuration files that can be shared and version controlled allows you to quickly deploy consistent infrastructure into any new environment. It also means that developers can be confident that features or patches built and tested in a staging environment will behave exactly the same way in production. Similarly, if a new feature does introduce a bug, infrastructure as code provides an easy mechanism to roll the production environment back to a stable state.
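The consistency argument can be sketched as a single template rendered per environment, so staging and production can differ only in their parameters (this is a CloudFormation-shaped illustration, not a deployable template):

```python
def environment_template(env):
    """Render the same declarative template for any environment; only the
    environment tag varies, so every deployment is structurally identical."""
    return {
        "Resources": {
            "AppServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": "t3.small",
                    "Tags": [{"Key": "env", "Value": env}],
                },
            }
        }
    }

staging = environment_template("staging")
production = environment_template("production")
# Everything except the environment tag is guaranteed to match,
# and rolling back is just redeploying an earlier version of the template.
```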

Continuous Monitoring

As the number of resources you manage grows, it becomes harder to ensure the performance and safety of your solution. At some point, the volume of metrics and log data that must be analyzed to keep critical services fully operational and protected becomes unmanageable. To maintain visibility over the entire system, automation is key, and often a combination of automation tools is required. Cloud environments and infrastructure as code allow you to easily experiment with different monitoring configurations.

Knowing which logs to monitor is a crucial aspect of implementing an efficient monitoring solution. Firewall logs, application logs, database logs, operating system logs, and many others all play a part in describing the health of your infrastructure.
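As a small example of automated log analysis, the sketch below counts failed logins per source IP in auth-log style lines and flags likely brute-force sources (the log format, threshold, and addresses are illustrative):

```python
import re
from collections import Counter

def flag_brute_force(log_lines, threshold=3):
    """Scan auth-log style lines for failed logins and return every
    source IP that failed at least `threshold` times."""
    failures = Counter()
    for line in log_lines:
        m = re.search(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)", line)
        if m:
            failures[m.group(1)] += 1
    return [ip for ip, n in failures.items() if n >= threshold]

logs = [
    "sshd: Failed password for root from 203.0.113.9 port 22",
    "sshd: Failed password for admin from 203.0.113.9 port 22",
    "sshd: Failed password for root from 203.0.113.9 port 22",
    "sshd: Accepted password for alice from 198.51.100.4 port 22",
]
print(flag_brute_force(logs))  # ['203.0.113.9']
```

In practice this kind of rule lives inside a log aggregation or SIEM platform rather than a standalone script, but the logic is the same.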

Protect Your Data At Rest

Identify confidential data and limit who can access it. Use strategies such as role-based access control to manage access levels, use encryption services to protect the data, and use hashing algorithms and/or digital signatures to verify data integrity. Even with all these measures in place, you should still plan for events like accidental deletion or natural disasters. Implement multi-factor authentication and use resource versioning to prevent accidental modification or deletion. Regular multi-region backups and data replication can protect your data from worst-case scenarios such as natural disasters.
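The integrity-verification idea can be sketched with a standard hash: store a SHA-256 digest alongside a backup and recompute it on restore (the payload here is illustrative):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Fingerprint a payload; any change to the bytes changes the digest."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report-v1"
stored_digest = sha256_digest(original)

# On restore, a matching digest confirms the data was not altered.
print(sha256_digest(b"quarterly-report-v1") == stored_digest)  # True
print(sha256_digest(b"quarterly-report-v2") == stored_digest)  # False
```

A hash alone proves integrity, not authenticity; pairing it with a digital signature or an HMAC keyed secret also proves who produced the data.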

Protect Your Data In Transit

Your encryption requirements should be based on industry best practices and mandate secure protocols for all applicable services in order to meet organizational and legal requirements. For example, enable HTTPS on load balancers and use SSL/TLS for database connections. Using a virtual private cloud (VPC) and strategic placement of services can ensure that sensitive traffic never travels over the public internet. For example, you could keep your application server in a public subnet reachable from outside, but place your database server in a private subnet, accessible only on specific ports from other servers inside the VPC.
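On the client side, "require secure protocols" translates into a TLS configuration with certificate verification on and legacy versions disabled. A minimal sketch using Python's standard ssl module (the TLS 1.2 floor is illustrative; set it per your compliance requirements):

```python
import ssl

# Build a client-side TLS context: certificate and hostname verification
# are on by default, and we refuse anything older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

The context would then be passed to the socket or HTTP client that opens the connection.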

If there is a requirement to provide external connectivity, use an IPsec VPN to secure point-to-point or network-to-network connections. This provides both data privacy and integrity.

Finally, use monitoring tools to detect suspicious or unusual data movement. Tools like Amazon GuardDuty can detect activity such as data being transferred via an unusual protocol, to an untrusted network, in unusually large volumes, or at unusual times.
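The "unusually large volumes" check can be illustrated with a simple statistical rule: flag any transfer far above the historical baseline. This is a toy stand-in for what a managed detector does, with made-up traffic numbers:

```python
from statistics import mean, stdev

def unusual_transfers(history_mb, current_mb, z_threshold=3.0):
    """Flag a transfer whose size sits more than `z_threshold` standard
    deviations above the historical mean transfer size."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    return current_mb > mu + z_threshold * sigma

history = [12, 15, 11, 14, 13, 12, 16, 14]   # typical daily transfers (MB)
print(unusual_transfers(history, 15))   # False: within the normal range
print(unusual_transfers(history, 400))  # True: worth investigating
```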

Keep Your Systems Up To Date

It is just as important to keep your servers up to date regardless of where they are physically located, whether in the cloud or on premises. Working with outdated operating systems and software can expose you to a range of security vulnerabilities. Managing a large number of resources can be a daunting task, but there are a number of steps you can take to tackle this issue.

Developing and maintaining a small number of custom operating system images allows you to create a manageable catalog of hardened, stable baseline images. Using out-of-the-box and/or custom tooling to monitor these systems in production can help highlight any weaknesses or vulnerabilities that are introduced, or alert you when updates are available. Finally, patching these systems should be automated. Cloud providers like AWS offer many managed services to assist with these steps; for example, Amazon Inspector and AWS Systems Manager can be used to detect and patch weaknesses, respectively.
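The detection half of that workflow reduces to comparing installed versions against the latest known versions. A sketch with illustrative version data (in practice both maps would come from your patch-management tooling):

```python
def outdated_packages(installed, latest):
    """Return the packages whose installed version is behind the latest
    known version, compared numerically component by component."""
    def parse(v):
        return tuple(int(p) for p in v.split("."))
    return sorted(pkg for pkg, ver in installed.items()
                  if parse(ver) < parse(latest.get(pkg, ver)))

installed = {"openssl": "1.1.1", "nginx": "1.24.0", "bash": "5.2.15"}
latest = {"openssl": "3.0.13", "nginx": "1.24.0", "bash": "5.2.21"}
print(outdated_packages(installed, latest))  # ['bash', 'openssl']
```

The output of a check like this would feed the automated patching step rather than a human to-do list.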

Endpoint Security

Using the services or applications of a particular cloud provider does not remove the need for robust endpoint security. Endpoint protection means securing end-user devices such as laptops, desktops, and mobile devices. Companies need to protect the endpoints on their corporate networks as well as the devices used to access their cloud accounts, because these devices serve as access points to all cloud processes and malicious actors can exploit them at any time. Strengthening endpoint security allows a company to shut down risky activities that could otherwise provide entry points. Furthermore, enforcing endpoint protection and compliance with existing data security regulations enables a business to maintain stronger control.

Endpoint protection matters more for cloud security as the number of access points to a cloud grows. Increasingly, organizations improve their operations by adopting practices for accessing data more fluidly. For example, they implement BYOD (Bring Your Own Device) policies, where employees can use their personal devices to access and modify cloud data. These devices require adequate endpoint security so that they do not give hackers easy targets for stealing or corrupting data; measures include using a VPN when accessing cloud accounts over public Wi-Fi.

Making It All Work Together 

The tips listed so far all offer direction on tackling specific problems, but the most important tip is to ensure that you have the correct infrastructure, the right processes, and the right team to bring all these solutions together and make everything work in harmony. The team crafting the policies should work with the other teams to get the policies right in as few iterations as possible. Adelear offers full-service system evaluation and assessment to address not only security vulnerabilities but also compliance violations, and processes that may lead to future violations.