Encrypt laptops to avoid HIPAA violations

Have you encrypted the hard drive or solid-state drive (SSD) on every laptop in your organization? If your business or nonprofit organization handles Protected Health Information (PHI), you should encrypt every laptop as soon as possible! Laptops that store PHI may be one of the greatest vulnerabilities in any organization. Just one stolen laptop can cost millions of dollars in legal fees and fines. Last week, two major settlements were announced that could have been prevented with proper laptop encryption. North Memorial Health Care of Minnesota agreed to pay $1.55 million, and Feinstein Institute for Medical Research agreed to pay $3.9 million, to settle charges that they potentially violated the HIPAA Privacy and Security Rules. I will summarize both cases and explain the simple steps that you can take to protect your organization.
Read More

Write-only bucket policy example for Amazon S3

Amazon S3 is widely used as a repository for backups. Backups are an important aspect of a resilient system, but they are also a potential security vulnerability. Unauthorized access to a database backup is still a PCI or HIPAA violation. The permissions on Amazon S3 should be configured to minimize access to critical backups. My strategy is to use IAM to create a backup user with no password (cannot log in to AWS) and a single access key. The backup user’s access key is used to put backup files into S3. The S3 bucket policy is configured to allow “write-only” access for the backup user. The backups cannot be obtained, even if the backup user’s credentials are compromised.

It is fairly difficult to figure out how to create a “write only” bucket policy. The policy shown below is the “write only” policy that I use. It consists of two statements: BucketPermissions gives the user the ability to locate the bucket (necessary to do anything in the bucket) and list its contents (useful for verifying that a backup was written). You may remove the s3:ListBucket action if true write-only access is desired. The ObjectPermissions statement allows the user to create objects in the specified bucket.

{
  "Version": "2012-10-17",
  "Id": "YOUR_POLICY_ID_NUMBER",
  "Statement": [
    {
      "Sid": "BucketPermissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::YOUR_ACCOUNT_ID:user/USERNAME"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Sid": "ObjectPermissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::YOUR_ACCOUNT_ID:user/USERNAME"
      },
      "Action": [
        "s3:PutObjectAcl",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}
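The placeholders in the policy (YOUR_ACCOUNT_ID, USERNAME, BUCKET_NAME) must be filled in before the policy is attached to a bucket. The sketch below shows one way to do that from the shell; the account ID, user name, and bucket name are example values, and the commented aws commands assume the AWS CLI is installed and configured with credentials that are allowed to manage IAM and the bucket.

```shell
# Example values -- replace with your own account ID, IAM user, and bucket.
ACCOUNT_ID="123456789012"
BACKUP_USER="backup-writer"
BUCKET="example-backup-bucket"

# Render the policy with the placeholders filled in.
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Id": "WriteOnlyBackupPolicy",
  "Statement": [
    {
      "Sid": "BucketPermissions",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::${ACCOUNT_ID}:user/${BACKUP_USER}" },
      "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ],
      "Resource": "arn:aws:s3:::${BUCKET}"
    },
    {
      "Sid": "ObjectPermissions",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::${ACCOUNT_ID}:user/${BACKUP_USER}" },
      "Action": [ "s3:PutObjectAcl", "s3:PutObject" ],
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF

# Create the password-less backup user, give it a single access key,
# and attach the policy to the bucket (requires admin credentials):
# aws iam create-user --user-name "$BACKUP_USER"
# aws iam create-access-key --user-name "$BACKUP_USER"
# aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://policy.json
```

Because the user is created without a login profile, the access key is the only credential it has, which keeps the blast radius small if the backup host is compromised.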

S3 does one odd thing: this policy allows the user to verify that a particular object exists in S3, even if they don’t have permission to GET the object. For example, running the command:

s3cmd get s3://BUCKET_NAME/PATH/BACKUP_FILE_NAME.tgz

produces the output:

s3://BUCKET_NAME/PATH/BACKUP_FILE_NAME.tgz -> ./BACKUP_FILE_NAME.tgz
ERROR: S3 error: Unknown error

It appears that the file is being downloaded, but in fact only an empty file is created! While it is generally a bad idea to allow unauthorized users to guess file names, it is not a real problem in this case, because the backup user’s credentials would have to be compromised even to confirm the existence of a file stored in S3.

References

  1. AWS Bucket Policy Reference
  2. S3 Encryption (highly recommended)

The Illustrated Guide to SSH Port Forwarding

SSH is a powerful tool for accessing remote systems. This guide will illustrate one of the more confusing and poorly documented capabilities of the ssh command on Linux: port forwarding. Port forwarding is a way to “tunnel” any TCP protocol through a secure, encrypted SSH connection. It can also be used to make network connections transparent to the applications that are using them.

The diagram below shows a user with an application running on a local machine (Client), such as a laptop. The app needs to interact with a server hosted on a remote host (Protected) which is isolated behind a login node (Login). This situation may occur when a user wants to run a management or admin GUI for a database such as MySQL or MongoDB.

In a production environment, a database server is never exposed directly to the internet. Database connections on a private network are often unencrypted to maximize speed. SSH port forwarding can be used to connect the GUI to a database on a remote server. Forwarding is also used for running visualization applications on a GPU node that is located behind the login node on a high-performance computing cluster.
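A minimal local-forwarding sketch for the scenario above: the hostnames, user name, and ports are examples. The -L flag maps a port on the Client to a host:port as seen from the Login node, and -N tells ssh to forward only, without running a remote shell.

```shell
# Example names for the hosts in the diagram -- replace with your own.
LOGIN="user@login.example.com"   # the internet-facing login node
PROTECTED="db.internal"          # the isolated database host, as Login resolves it
LOCAL_PORT=13306                 # port the GUI will connect to on the Client
REMOTE_PORT=3306                 # MySQL's port on the protected host

# Forward localhost:13306 on the Client to db.internal:3306, tunneled
# through the encrypted SSH connection to the Login node.
FORWARD="ssh -N -L ${LOCAL_PORT}:${PROTECTED}:${REMOTE_PORT} ${LOGIN}"
echo "$FORWARD"

# Run the command above in one terminal, then point the database GUI
# at 127.0.0.1:13306 -- the GUI behaves as if the database were local.
```

Note that the connection from Login to Protected travels over the private network, so only the Client-to-Login leg is encrypted by SSH; that matches the common setup where the private segment is trusted and left unencrypted for speed.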

SSH Port Forwarding

Read More

Knowing about Open Compute can help you make better decisions about your infrastructure

The Open Compute Project (OCP) is valuable to enterprise IT professionals because it embodies the best practices of companies that operate hyperscale computing systems. It’s not often that a business is willing to share information about the practices that are key to their competitiveness. Economical, efficient, and scalable infrastructure is crucial to the success of companies such as Google, Amazon, and Facebook. By studying the Open Compute Project, you can learn about the best practices of computing at hyperscale, and determine which practices can be applied to improve your IT operations.

For years, hyperscale operators have been working directly with original design manufacturers (ODMs) to design and produce hardware that meets their unique needs. In 2012, Google claimed to be one of the largest hardware makers in the world, and had probably been building its own servers for years. In 2011, Facebook started the Open Compute Project in an effort to standardize the design of servers and infrastructure for a hyperscale environment. The OCP releases open-source hardware specifications that can be implemented by any ODM. Key design goals include minimizing initial cost and power consumption, and maximizing interoperability and standardization. The hardware is designed to be “vanity-free,” meaning that it does not incorporate any features that are specific to a particular manufacturer. These design goals have led to some interesting departures from industry conventions.


Open Rack

OCP servers are primarily intended to fit into the Open Rack (although 19” servers with OCP-compliant motherboards are now available). This rack has the same floor footprint as a standard 19” rack, but it is very different internally. Rack height is measured in “OpenU”: one OpenU is 48mm, while 1U in a 19” rack is 44.5mm. Three high-current, 12V DC power buses run down the back of the Open Rack. The rack is divided into three “power zones,” each with ten OpenU for system shelves. Each power zone has a 3-OpenU “power shelf” that supplies 4200W to the DC power buses in that zone. Two OpenU at the top of each rack are reserved for a network switch.


OCP triplet server

A typical OCP server is housed in a deep, narrow “system tray” that contains a motherboard with two CPUs, one hard drive, and fans. The rear of the tray has a power plug that fits into one of the DC power buses in the Open Rack. Three trays can fit side-by-side on a shelf that occupies one OpenU. Alternatively, OCP-compliant servers are available with four nodes in a 2-OpenU unit. OCP servers are capable of operating in an environment with a higher ambient temperature and higher humidity than a typical data center. This capability reduces the cooling requirements for the data center, increasing its energy efficiency and reducing operating costs. Facebook has also released open-source design specifications for its data centers through the Open Compute Project.
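Some back-of-the-envelope arithmetic from the figures above shows why this layout is attractive. The numbers below come straight from the rack and tray descriptions; the resulting density and per-tray power budget are illustrative only, since real budgets depend on CPU, drive, and fan load.

```shell
# Figures quoted in the rack and tray descriptions above.
ZONES=3                  # power zones per Open Rack
SHELF_OPENU_PER_ZONE=10  # OpenU of system shelves in each zone
TRAYS_PER_OPENU=3        # triplet: three trays side-by-side per OpenU shelf
ZONE_POWER_W=4200        # watts supplied by each zone's power shelf

# Derived density and worst-case per-tray power budget.
TRAYS_PER_ZONE=$((SHELF_OPENU_PER_ZONE * TRAYS_PER_OPENU))
TRAYS_PER_RACK=$((TRAYS_PER_ZONE * ZONES))
WATTS_PER_TRAY=$((ZONE_POWER_W / TRAYS_PER_ZONE))

echo "${TRAYS_PER_RACK} two-socket trays per rack, ~${WATTS_PER_TRAY}W per tray"
```

A fully populated rack of triplet shelves holds 90 two-socket servers, with roughly 140W available per tray from the zone's power shelf, which is consistent with the low-power, vanity-free design goals described earlier.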

The Open Compute Project provides a rare inside look at a set of best practices for hyperscale computing. I encourage you to follow the links to the various OCP specifications, which are concise and easy to read. Of course, most of us don’t have the opportunity to build a hyperscale computing infrastructure from scratch. In future articles, I’ll go into more detail about specific aspects of the project and explain how specific best practices from the OCP can be applied in a typical enterprise setting.