Data Centre Design Service

Designing a data centre takes careful planning and consideration.

The main points we consider when designing a data centre are set out below.

1 Data Centre Location

Power
Availability of supply needs to be confirmed with both the local electricity supply authority and, where applicable, the landlord. If it is envisaged that the site will need more power in the future, costs for the supply upgrade should be sought from the electricity supply authority.

Planning
Planning permission will be needed for any changes to the exterior of the building or if a change of use is envisaged for the site (changing car parking space into air conditioning and electrical plant areas, etc.). Local authorities will require an energy conservation report to be submitted with the planning application, and Building Regulations approval is required for a change of internal use of the building.

Carrier and Connection
Availability of connections needs to be confirmed - preferably more than one, and ideally near a carrier backbone.

Noise
Restrictions may apply in residential areas, in which case air conditioning units and generator sets need to be chosen to avoid violating local noise limits.

External space
For efficient running of the data centre infrastructure, adequate space for air flow around air conditioning units is required. Space will also be needed for generator sets and any external switchgear enclosures. External enclosures are a good way to free up interior space for computer rack areas within the data centre. Easy access by road is desirable, with parking and goods loading areas.

Security
Where security is an important factor, access to the site needs to be controlled. This is easier if the site is located in its own compound, and is in fact a requirement for tier III or IV. A site with its own supply is also preferable, to avoid problems with other tenants’ electrical feedback into a shared supply. TIA 942 also recommends that sites should not be located in high-crime areas or adjacent to embassies.

External Influences
TIA 942 also recommends that data centres should not be located near chemical plants, landfill sites, rivers, dams or airports, nor near radar and transmitter masts such as mobile phone masts, because of the effects of electromagnetic interference.

In practice, however, we have built data centres in most of the places listed above and designed the site to overcome any potential problems; for instance, a Faraday cage can be installed to mitigate any adverse effects of radar emissions, and bunding and extra water protection measures can be installed for sites near rivers or at risk of flooding. Whatever the circumstances, ITE Projects will work to achieve a viable solution for the client.

2 Data Centre Building

If it is to be a co-location facility, the type, size and layout of the data centre will be greatly affected by the requirements of the data centre operator and the type of client.
TIA 942 recommends that space should be easy to reallocate to meet changing requirements and growth, a philosophy we have adopted and one which has allowed our sites to grow well beyond the initial build stage.
The layout is designed to keep the different functions of the data centre in discrete, contained areas and to minimise movement between them. Movement through the comms area itself is restricted and controlled by security. The layout is also determined by equipment that needs to be installed in a particular way to optimise efficiency.

Amenity areas
For a functional data centre these will be limited to toilet areas (covered by Building Regulations Part G and, for disabled users, Part M) and a staff rest room if required. We have constructed these more functional data centres where cost is the main consideration; they may be built in low-rent areas, with the use of the building remaining discreet.
A more prestigious site will incorporate showers and canteen areas for staff and clients; these could be quite extensive, depending upon the requirements of the client.

Office, Control and Meeting Areas
For a functional site, a basic office area for the data centre operating staff with a security guard area at the entrance is the minimum we have constructed; however, tier III or IV sites require more substantial support and administrative areas. For these sites we have designed reception and waiting areas with the clients’ visitors in mind, installing feature lighting and bespoke desk and seating arrangements. Screens are used to promote the clients’ businesses, and computer terminals have also been installed around the reception perimeter for internet browsing.
Security systems such as card swipe entry and turnstile gates should be considered.
NOC and control rooms are required for tier III and IV data centres; they can be made viewable by clients as a feature and incorporate custom-made desks and furniture with large-format screens visible to all NOC staff.
Larger data centres will also need to accommodate office and meeting rooms.
A build area for servicing servers and other equipment is always useful.

Goods Lift/Loading Area
Loading and storage areas are required, particularly for large data centres. Ramps or scissor lifts may be needed to lift goods onto the raised computer room floor area. Storage and build areas should be incorporated into the design.
If the facility is built on a number of levels, a goods lift will be required.

[Image: rendering of part of a data centre showing comms racks separated into discrete areas, completed using Autodesk Revit for a London client.]

Comms Room Area
TIA 942, which incorporates the Uptime Institute's tier classifications, advocates the use of a raised computer room floor to allow for cooling and cable containment.
Once the number of racks to be hosted and the power required for each rack have been determined, the electrical and cooling infrastructure can be designed.
The cooling duty can then be calculated, determining the number, type and positioning of the computer room air conditioning (CRAC) units (a simple sizing sketch is given below).
The CRAC units need to be positioned so that cool air reaches the furthest racks and hot air can return, and cool air supply grilles need to be positioned to provide the correct amount of cooling for each rack.
TIA 942 and ASHRAE both advocate positioning racks in the seven-tile hot/cold aisle arrangement to maximise the cooling efficiency of the plant.
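
As a simple illustration of the cooling-duty calculation described above, the sketch below estimates the heat load from an assumed rack count and per-rack power, and the number of CRAC units needed for an assumed unit capacity with N+1 redundancy; all of the figures are illustrative assumptions rather than values from any particular project.

    import math

    def crac_units_required(num_racks, kw_per_rack, crac_capacity_kw, redundancy=1):
        """Estimate the cooling duty and CRAC unit count for an N+redundancy design.

        Assumes the cooling duty roughly equals the IT electrical load,
        since almost all rack power ends up as heat.
        """
        cooling_duty_kw = num_racks * kw_per_rack                   # total heat to reject
        base_units = math.ceil(cooling_duty_kw / crac_capacity_kw)  # the 'N' in N+1
        return cooling_duty_kw, base_units + redundancy

    # Illustrative example: 100 racks at 4 kW each, 60 kW CRAC units, N+1 redundancy.
    duty, units = crac_units_required(100, 4.0, 60.0, redundancy=1)
    print(f"Cooling duty: {duty:.0f} kW, CRAC units required (N+1): {units}")
    # Cooling duty: 400 kW, CRAC units required (N+1): 8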
For tier III or IV systems, the racks need separate supply paths, which requires dual-supply racks with dual power distribution units.
It may be beneficial to split the comms area into a number of discrete computer room suites, particularly for a large area. This will allow clients their own suite within the facility and will also help with cooling.
Access routes for rack and equipment installation will be required as well as emergency exit routes.
Equipment loading may need to be checked against the rating of the floor, especially for upper levels.
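
As a quick illustration of the floor-loading check, the sketch below converts an assumed rack weight and footprint into a distributed load and compares it with an assumed raised-floor rating; both figures are hypothetical and would in practice come from the rack vendor and the floor manufacturer, who will also specify point-load limits.

    # Hedged example: compare a rack's distributed load with an assumed floor rating.
    rack_weight_kg = 900.0            # assumed weight of a fully populated rack
    footprint_m2 = 0.6 * 1.2          # assumed 600 mm x 1200 mm rack footprint
    floor_rating_kg_per_m2 = 1500.0   # assumed distributed rating of the raised floor

    load_kg_per_m2 = rack_weight_kg / footprint_m2
    verdict = "within" if load_kg_per_m2 <= floor_rating_kg_per_m2 else "exceeding"
    print(f"Rack load: {load_kg_per_m2:.0f} kg/m2, {verdict} the "
          f"{floor_rating_kg_per_m2:.0f} kg/m2 floor rating")
    # Rack load: 1250 kg/m2, within the 1500 kg/m2 floor rating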

3 Resilience of the Data Centre

The resilience of the data centre will have a major influence on the data centre design and cost of the installation.
Although it is possible to upgrade the data centre to a higher tier rating at a later date, it is more cost-effective if the intention is known at the initial design phase.
The levels of resilience are defined in TIA 942 and range from tier I to tier IV.

Tier classifications per Uptime Institute - TIA 942

                                Tier I           Tier II          Tier III               Tier IV
Site availability               99.671%          99.749%          99.982%                99.995%
Downtime (hrs/yr)               28.8             22.0             1.6                    0.4
Operations centre               Not required     Not required     Required               Required
Redundancy for power, cooling   N                N+1              N+1                    2(N+1)
Fire suppression                Not required     Not required     FM200 or Inergen       FM200 or Inergen
Power/cooling paths             1                1                1 active, 1 passive    2 active
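
The downtime column follows directly from the availability percentages, assuming an 8,760-hour year; the short sketch below reproduces the figures in the table above.

    HOURS_PER_YEAR = 8760  # 365-day year

    tier_availability = {
        "Tier I": 0.99671,
        "Tier II": 0.99749,
        "Tier III": 0.99982,
        "Tier IV": 0.99995,
    }

    for tier, availability in tier_availability.items():
        downtime_hours = (1 - availability) * HOURS_PER_YEAR
        print(f"{tier}: {downtime_hours:.1f} hours of downtime per year")
    # Tier I: 28.8, Tier II: 22.0, Tier III: 1.6, Tier IV: 0.4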

Tier I
A basic specification for a small computer room, susceptible to disruption from planned and unplanned activity.
May be installed in tenant occupancy.
May be left unstaffed.
May or may not have raised floor.
A single path for power and cooling and the lack of redundant components mean that a single point of failure could shut down the whole system.
May have generator and/or UPS but with no redundant components.
(We recommend a UPS bypass to allow continued system operation during maintenance).

Tier II
Single power and cooling distribution route, but with redundant (N+1) components.
Commonly used in computer rooms and smaller data centres.
May be installed in tenant occupancy.
Needs to be staffed in working hours.
Needs a raised floor.
A UPS can be taken off line for servicing without compromising the integrity of the site.
Site will include generator support for prolonged power cuts.
Site is less susceptible to disruption from planned and unplanned activity.

Tier III
A more resilient system, common in larger data centres.
Two paths for power and cooling allows for planned site infrastructure maintenance without disruption to the data centre, including testing, repair and replacement of components and addition or removal of components.
One path, however, is not UPS supported and is used only when the system is under maintenance or during a failure of the primary path.
(N+1) Redundant components.
Site needs to be stand-alone and not part of a tenant occupancy.
Staffed by extended shifts.
Raised floor with 800mm floor void height.

Tier IV
The infrastructure will have two independent power and cooling paths, each of which will have (N+1) components. The power for each path should be derived from its own separate ring main unit.
There will be no single points of failure within the infrastructure.
Needs to be on a stand-alone site.
Needs to be staffed continuously.
Raised floor with 800mm floor void height.

4 Efficiency of the Data Centre

A number of factors affect the power consumption and efficiency of the data centre, principally the type of equipment chosen and the way it is installed.
The equipment selected will have a great effect on the running costs of the data centre.

Equipment
UPS systems in particular vary in efficiency depending upon the model selected and can consume up to 10% of the total data centre power requirement.

The data centre cooling system consumes the largest part of the data centre infrastructure load, with some older, less efficient systems consuming half the total load. The equipment choices include:

DX - Direct expansion systems offer the advantages of reduced initial installation costs and the ability to add further computer room air conditioning units if and when the cooling load increases.
As each air handling and condenser unit comprises a self-contained cooling unit, a fault in one section of pipe work will not affect other air handling units, thus increasing the resilience of the facility.
DX cooling is particularly suited to small and medium computer rooms or data centres. Large facilities, however, will need to accommodate a large number of pipe runs with this system.
Chilled water systems are normally employed in larger data centres and computer rooms; they have the advantage of being able to run in free cool mode with reduced running costs where conditions allow.
Chilled water systems employ a single set of pipe work to serve any number of computer room air handling units. This reduces the amount of pipe work within the facility but also introduces a single point of failure within the pumps and along the pipe route.
To increase the resilience of the chilled water system, a second set of stand-by pumps is normally employed to allow for maintenance and pump failure.
Further resilience can be introduced by using two sets of pumps and pipe work supplying alternate air handling units throughout the facility.

Best Practice Installation and Design Methods
The data centre design and installation method also has an effect on the efficiency of the installation. One of our most recent data centres, built to a strict budget and using DX cooling, achieved a PUE of 1.45, with about 30% of the data centre power being used for cooling.
This was achieved by carefully selecting the air handling and condenser units and positioning them to maximise the air flow around them.
The UPS system can produce up to 18% of the total heat load for the site; for this we created a separate non-air-conditioned area supplied with either filtered external air or recirculated internal air, depending on the room temperature. This drastically reduced the cooling power requirement.
Large efficiency gains can be made by adopting the hot/cold rack layout noted above. The exhaust side of the racks expels air into a hot aisle and back to the CRAC units, ideally without mixing with the cooler supply air, allowing the CRAC units to function more efficiently.
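
To show how a PUE figure such as the one quoted above is derived, the sketch below divides total facility power by the IT load; the kW values are illustrative assumptions chosen to give a result close to 1.45, not measurements from the project described.

    it_load_kw = 400.0        # assumed IT (rack) load
    cooling_kw = 120.0        # assumed cooling load (about 30% of the IT load)
    ups_and_other_kw = 60.0   # assumed UPS losses, lighting and ancillaries

    total_facility_kw = it_load_kw + cooling_kw + ups_and_other_kw
    pue = total_facility_kw / it_load_kw
    print(f"PUE = {total_facility_kw:.0f} / {it_load_kw:.0f} = {pue:.2f}")
    # PUE = 580 / 400 = 1.45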

Further improvements can be made by containing the hot aisle, the cold aisle or both: doors or partitions are installed around the rack aisles and the hot air is channelled into the ceiling void, preventing any mixing of the cold supply and hot return air.
As data centres tend to present large inductive loads, power factor correction should be factored into the initial design, allowing more correction to be added as the data centre is populated.
Electricity supply authorities will impose a tariff penalty on customers running large inductive loads in their data centres.
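
To illustrate how the amount of correction is sized, the sketch below applies the standard relationship Qc = P x (tan(arccos(PF_initial)) - tan(arccos(PF_target))); the load and power factor values are illustrative assumptions.

    import math

    def correction_kvar(load_kw, pf_initial, pf_target):
        """Capacitive kvar needed to raise a load's power factor from pf_initial to pf_target."""
        return load_kw * (math.tan(math.acos(pf_initial)) - math.tan(math.acos(pf_target)))

    # Illustrative example: 500 kW of inductive load corrected from 0.85 to 0.97.
    kvar = correction_kvar(500.0, 0.85, 0.97)
    print(f"Correction required: about {kvar:.0f} kvar")
    # Correction required: about 185 kvar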

5 Data Centre Security

Fire Detection and Suppression
Any tier III or tier IV data centre will require a means of fire suppression; tier I or II sites will require fire detection.
Fire detection is carried out using a mixture of ionisation, optical and heat detection placed throughout the data centre, within the floor void and, if it is not fire rated, above the suspended ceiling.
The system should be monitored either on site or remotely.
VESDA (Very Early Smoke Detection Apparatus) systems are installed throughout computer rooms and data centres.
Fire suppression systems include FM200, which uses a minimum of plant and is suitable for computer rooms and data centres, and water mist or inert gas systems, which require more infrastructure and are better suited to larger facilities.

Environmental Monitoring and Maintenance
Environmental monitoring systems (EMS) are used in comms rooms and data centres to give early warning of a potential problem. If an air conditioning unit develops a fault, quite often the first indication the client has of the problem is equipment going into alarm due to high temperature. With the advent of blade servers and other power-hungry equipment, the room temperature can rise quickly. With a suitable EMS in place, possible downtime can be avoided and a maintenance crew can be on the way before the fault becomes an emergency.
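
As a highly simplified illustration of the role of an EMS, the sketch below polls a set of temperature readings and raises an alert when a threshold is exceeded; the sensor-reading function, sensor names, threshold and alert mechanism are all hypothetical placeholders, since a real system would talk to the monitoring hardware over its own protocol (typically SNMP or Modbus) and feed a proper alerting chain.

    import random
    import time

    ALARM_THRESHOLD_C = 27.0   # assumed high-temperature limit for the cold aisle

    def read_sensor(sensor_id):
        """Placeholder for a real sensor read over SNMP/Modbus; returns a random value here."""
        return random.uniform(20.0, 30.0)

    def poll_once(sensor_ids):
        for sensor_id in sensor_ids:
            temperature = read_sensor(sensor_id)
            if temperature > ALARM_THRESHOLD_C:
                # A real EMS would page the maintenance crew or raise a BMS alarm here.
                print(f"ALERT: {sensor_id} at {temperature:.1f} C exceeds {ALARM_THRESHOLD_C} C")

    # Poll three hypothetical cold-aisle sensors for a few cycles.
    for _ in range(3):
        poll_once(["cold-aisle-1", "cold-aisle-2", "cold-aisle-3"])
        time.sleep(5)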
