
Not replacing your ageing servers can expose your organisation to considerable security risk. Here, we discuss the essential considerations in making an informed server replacement decision.

It’s unsurprising that replacing servers often ends up at the bottom of an organisation’s list of priorities. With competing business opportunities, limited manpower and tight budgets, a still-running server can give the impression that things are moving along efficiently.

But what it provides is a false sense of security. In a time rife with cyberattacks, obsolete servers — whether running or malfunctioning — are ticking time bombs for small and mid-sized enterprises (SMEs). Numerous studies in 2018 and 2019 have shown that old servers harbour vulnerabilities that hackers can easily exploit[1][2][3].

Additionally, now that support for Windows Server 2008 and 2008 R2 has ended, SMEs still running these servers face costly fees for continued security updates. Even so, these extended updates have a life-cycle of only three years, and may not provide the protection needed to fend off security threats.

What are the implications of using old servers?

Holding on to old servers is becoming increasingly impractical and risky in the long run. Should an organisation continue to use them, IT managers must be prepared for an uphill battle against data breaches and downtime.

Implications vary from business to business, but some issues are common to all.

Firstly, IT operating costs will increase, as old servers need more frequent patching of operating systems and hypervisors to offset chipset vulnerabilities. These tasks are unavoidable additions to the workloads of IT staff.

Next, data breaches are a pricey matter, and the cost of such violations continues to rise every year, according to a Ponemon Institute study[4]. Retargeting by cyber attackers adds to the damage. Old servers are especially susceptible to attacks such as ransomware: they make easier targets because hackers are more familiar with the hardware’s inner workings and flaws. As a result, cyber insurance for these servers may become costlier once insurers assess the risks.

Most notably, all of this can lead to significant downtime for organisations. It can be harder to identify hardware vulnerabilities on old servers, and harder still to find remediation in the form of software patches. Recovery for these servers becomes a tiresome affair.

Extra layers of protection

The risks of ageing servers are undeniable. In response, many businesses are migrating to cloud servers as a more flexible option. But doing so carries its own risks and challenges, such as security issues, higher-than-expected costs, and data-recovery difficulties.

An ‘all on-premise’ or ‘all cloud’ deployment is impractical for SMEs. As such, hybrid IT, which combines the benefits of on-premise and cloud servers, is set to become more prominent in the workplace of the future.

Modernising IT is the wisest move to protect SMEs. Still, there are built-in features IT administrators should look for in new servers to keep their data secure:

  • Immutable Authenticity Assurance: Ensure that a server’s firmware is authentic by looking for the silicon root of trust, a fingerprint of the firmware burned into the server’s silicon at the time of manufacture. If the current firmware code matches the silicon fingerprint, authenticity is confirmed.
  • Authoritative Alerts: These fingerprints are permanent and designed to prevent false positives. So if there are any mismatches at boot or run-time, these alerts should be taken seriously.
  • Simple Recovery to ‘Trusted State’: As firmware checks occur during run-time, and the firmware fingerprint is permanent, recovering to a trusted state is straightforward — reboot.
  • Built Compliant: Servers should follow cyber regulations instead of requiring complicated means to stay secure.
  • Native Data-at-Rest Protection: Don’t treat third-party technologies as the default solution for protecting data at rest. Servers with uncompromised firmware should be the first line of protection for data.
  • Performance and Agility: New servers should also deliver better performance and agility. Examine a server’s ability to fine-tune performance to workloads, its OS’s features for security, storage and virtualisation, as well as its cloud compatibility.
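The silicon-root-of-trust check described above boils down to comparing the hash of the running firmware against an immutable fingerprint. The sketch below illustrates that logic only; the constant fingerprint, function names and messages are hypothetical stand-ins — in a real server the fingerprint is burned into silicon and the comparison happens in hardware before the firmware ever runs.

```python
import hashlib

# Hypothetical fingerprint "burned in at manufacture". On real hardware this
# value lives in silicon and cannot be altered; a constant is used here
# purely for illustration.
SILICON_FINGERPRINT = hashlib.sha256(b"trusted-firmware-image-v1").hexdigest()

def firmware_is_authentic(firmware_image: bytes) -> bool:
    """Hash the current firmware and compare it to the immutable fingerprint."""
    return hashlib.sha256(firmware_image).hexdigest() == SILICON_FINGERPRINT

def boot_check(firmware_image: bytes) -> str:
    # The same check runs at boot and periodically at run-time. Because the
    # fingerprint is permanent, a mismatch is an authoritative alert, and
    # recovery to a trusted state is simply a reboot onto known-good firmware.
    if firmware_is_authentic(firmware_image):
        return "ok: firmware verified"
    return "alert: fingerprint mismatch - reboot to trusted state"
```

The key design point mirrored here is that the reference value is immutable: because it cannot drift or be rewritten by an attacker, any mismatch is by construction a true positive.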

[1] U.S. Department of Homeland Security, Alert (TA18-004A)—Meltdown and Spectre Side-Channel Vulnerability Guidance, Original release date: January 3, 2018
[2] The Register, The BMC in OpenBMC stands for ‘Burglarise My Computer’—thanks to irritating security flaw, January 24, 2019
[3] ZDNet, Researchers discover seven new Meltdown and Spectre attacks, November 14, 2018
[4] Ponemon Institute LLC, 2018 Cost of a Data Breach Study: Global Overview

Keen to learn more? Contact us to find out how HPE servers can improve your business.
