Status

ENTA NOC STATUS


Enta NOC feed Incident: Glasgow

We have experienced two brief losses of power to our equipment located in our Glasgow PoP. Engineers are investigating with our colocation provider. Power is currently restored; however, it should still be considered at risk. Further updates will be provided when available.



March 23, 2017
Enta NOC feed Incident: Edinburgh

We have experienced a loss of power to our equipment located in our Edinburgh PoP. Power has already been restored; however, some services remain affected. Engineers are investigating and further updates will be provided when available.



March 21, 2017
Enta NOC feed Planned Maintenance: VoIP Platform

Planned Maintenance on the VoIP platform is due to continue on Tuesday and Thursday each week between 23:00 – 23:30 until further notice. Users may experience some disruption initiating calls or a disconnection of any call in progress once the work is underway. In order to ensure the number of users affected is at a minimum, we are looking to complete any maintenance work outside of peak times.

We will comment further on this post should there be any individual dates where the window may be amended.  We will notify via an independent post should the window change indefinitely.



March 20, 2017
Enta NOC feed At-Risk: Cambridge to Telehouse fibre link maintenance
30-Mar-2017 02:00 GMT to 30-Mar-2017 06:00 GMT
During the above window one of our fibre suppliers will be carrying out maintenance on one of our links between Cambridge and Telehouse East. Whilst the actual downtime is expected to be less, the full 4 hours may be required to account for any unforeseen issues. This work is not expected to be service affecting; however, you may see unusual paths and an increase in latency while traffic is automatically rerouted via alternate routes.


March 16, 2017
Enta NOC feed Incident: Telford Power Loss

We have just experienced a loss of power to the Telford Data Centre. Generator and UPS protection kicked in as expected; however, some internal systems remain offline. Engineers are currently investigating.



March 15, 2017
Enta NOC feed Planned Maintenance: VoIP Platform

We are planning maintenance work on the VoIP platform on both Tuesday 14th March and Thursday 16th March.  Tuesday’s work will be completed between 23:00 – 23:30 and users may experience some disruption initiating calls or a disconnection of any call in progress once the work has begun.  Thursday’s window will be confirmed via a further comment on this post upon completion of Tuesday’s activity.  In order to ensure the number of users affected is at a minimum, we are looking to complete any maintenance work outside of peak times.

Any further related maintenance after Thursday will be notified via its own independent post.



March 14, 2017
Enta NOC feed Incident: PWAN LNS

We are aware of a short disruption to a small number of PWAN connections at approximately 12:24. This was caused by an unscheduled reload of one LNS.

Any connections affected during the incident would have automatically connected to an alternative LNS, where service would have been restored.

We apologise for any inconvenience caused.



March 9, 2017
Enta NOC feed At Risk: PWAN Infrastructure

Thursday 09/03/17 23:00 – 23:59

This evening, during the above window, we will be performing an emergency software upgrade on some devices within our PWAN infrastructure.

No service impact to customers is expected.



March 9, 2017
Enta NOC feed Planned Maintenance: VoIP Platform

We are planning maintenance work on the VoIP platform on both Tuesday 7th March and Thursday 9th March.  Tuesday’s work will be completed between 23:00 – 23:30 and users may experience some disruption initiating calls or a disconnection of any call in progress once the work has begun.  Thursday’s window will be confirmed via a further comment on this post upon completion of Tuesday’s activity.  In order to ensure the number of users affected is at a minimum, we are looking to complete any maintenance work outside of peak times.

Any further related maintenance after Thursday will be notified via its own independent post.



March 6, 2017
Enta NOC feed Planned Maintenance: VoIP Platform

We are planning maintenance work on the VoIP platform on both Tuesday 28th February and Thursday 2nd March.  Tuesday’s work will be completed between 23:00 – 23:30 and users may experience some disruption initiating calls or a disconnection of any call in progress once the work has begun.  Thursday’s window will be confirmed via a further comment on this post upon completion of Tuesday’s activity.  In order to ensure the number of users affected is at a minimum, we are looking to complete any maintenance work outside of peak times.

Any further related maintenance after Thursday will be notified via its own independent post.



February 27, 2017

OVH STATUS - DEDICATED SERVERS


OVH status feed Rack 42C11

Task Type: Maintenance

Category: RBX3

Status: Finished


Summary: We have detected a fault on the remote reboot equipment.
Actions: Replace the remote reboot system.
Start: Wednesday 29 March 2017 at 6:00 AM (local time)
Affected racks: 42C11
Impact: No remote reboot available during the intervention.
Estimated time: 1 hour

Comments:

Date: Wed, 29 Mar 2017 06:51:38 +0200

Intervention started.

Date: Wed, 29 Mar 2017 09:36:18 +0200

The remote system has been replaced. The intervention is complete.



March 29, 2017
OVH status feed 75A49

Task Type: Maintenance

Category: SBG1

Status: Finished


Summary: We have detected a fault in the electrical equipment of this rack.
Actions: Replacement of the defective parts.
Start: Tuesday, March 28th starting at 6:00 AM
Affected Rack: 75A49
Impact: Shutdown of the servers while the parts are replaced
Estimated Time: 1 hour

Comments:

Date: Tue, 28 Mar 2017 06:18:16 +0200

We are starting the intervention.

Date: Tue, 28 Mar 2017 07:48:08 +0200

The defective parts were replaced. The intervention is complete.



March 28, 2017
OVH status feed G107B05

Task Type: Maintenance

Category: GRA1

Status: Finished


Summary: We have detected a fault in the electrical equipment of this rack.
Actions: Replacement of the defective parts.
Start: Tuesday, March 28th starting at 6:00 AM
Affected Rack: G107B05
Impact: Shutdown of the servers while the parts are replaced
Estimated Time: 1 hour

Comments:

Date: Tue, 28 Mar 2017 06:21:44 +0200

We are starting the intervention.

Date: Tue, 28 Mar 2017 07:42:18 +0200

The defective parts were replaced. The intervention is complete.



March 28, 2017
OVH status feed FS#23573 — T06A

Task Type: Incident

Category: BHS3

Status: Finished


We had a short circuit on one of the paths feeding room T06A.

We also lost the second path.

The servers are being restarted.

We are investigating.

March 20, 2017
OVH status feed mrtg-rbx-100.ovh.net

Task Type: Maintenance

Category: RBX1

Status: Finished


As a preventive measure, we plan to replace a pool of data disks on this machine on Tuesday 23 August (between 21h and 22h CET); no impact is expected for customers.

Comments:

Date: Tue, 23 Aug 2016 21:47:48 +0200

The maintenance is postponed to tomorrow, Wednesday 24 August (between 21h and 22h CET).

Date: Thu, 25 Aug 2016 04:22:15 +0200

Maintenance is finally postponed to Thursday 25 August (between 21h and 22h CET).



March 20, 2017
OVH status feed Rack 41D01

Task Type: Maintenance

Category: RBX3

Status: Finished


Summary: We have detected a fault in part of the cooling system of this rack.
Actions: Replacement of the defective parts
Start: Thursday 23 February 2017 at 1:30 PM CET
Affected Rack: 41D01
Impact: None
Estimated Time: 30 minutes

March 20, 2017
OVH status feed Rack 14D03

Task Type: Maintenance

Category: RBX1

Status: Finished


Summary: We have detected a malfunction on the hard reboot equipment.
Actions: Replace the hard reboot equipment
Start: Thursday 15 December at 6:00 AM (French time)
Affected racks: 14D03
Impact: Power loss to the servers
Estimated time: 1 hour

March 20, 2017
OVH status feed FS#19339 — Netboot Strasbourg

Task Type: Maintenance

Category: paris 19

Status: Finished


We have detected delays in the Strasbourg netboot infrastructure. We are investigating.

March 20, 2017
OVH status feed Vrack Tasks Canada

Task Type: Incident

Category: BHS1

Status: Finished


We are currently experiencing difficulties with a piece of Vrack equipment in Beauharnois; the addition of new services to the Vrack is impacted.
No impact is expected on services already connected to the Vrack.

A support ticket has been opened with the manufacturer, and we are investigating.

Comments:

Date: Thu, 19 Jan 2017 11:03:49 +0100

Tasks are back in service. We continue to monitor.



March 20, 2017
OVH status feed Rack 9G06

Task Type: Incident

Category: RBX1

Status: Finished


Rack 9G06 is currently experiencing an electrical issue; our team is on site to rectify the situation.

March 20, 2017

OVH STATUS - EMAIL


OVH status feed FS#22748 — Mailproxy

Task Type: Incident

Category: MX

Status: Finished


An overload has been detected on the email entry point.
We are temporarily delaying the delivery of messages.
As a result, there are delays in receiving messages.

Comments:

Date: Tue, 24 Jan 2017 14:51:36 +0100

We had an overload on the internal SQL servers for the Mailproxy. We have stabilized the SQL cluster and we are starting to deliver the emails stored on the infrastructure. We are also beginning to accept new messages progressively.

Date: Tue, 24 Jan 2017 14:51:56 +0100

We are currently processing more than 70,000 messages per minute and are catching up on the backlog of delayed messages. New messages are being accepted intermittently.

Date: Tue, 24 Jan 2017 14:52:18 +0100

Traffic seems to have stabilized and we have caught up. Some messages still awaiting delivery will be delivered within a few minutes.



January 24, 2017
OVH status feed FS#20441 — Webmails

Task Type: Modernization

Category: webmail

Status: Finished


We will move the webmail cluster to another infrastructure.
This operation is scheduled for Wednesday night, if everything is ready.
Downtime is expected.

Comments:

Date: Wed, 21 Sep 2016 18:50:30 +0200

We will start the intervention at 23:00 tonight; it will take around 3 hours. During this intervention, webmail access will be unavailable.

Date: Thu, 22 Sep 2016 19:55:07 +0200

Starting...

Date: Thu, 22 Sep 2016 19:55:36 +0200

The intervention is complete, with no problems during the migration :) We will take care of the new infrastructure tomorrow and over the coming weeks.



September 22, 2016
OVH status feed FS#19650 — Orange, Wanadoo

Task Type: Incident

Category: All outgoing emails

Status: In progress


We have detected delays in email delivery to Orange and Wanadoo.
We are working on it.

August 10, 2016
OVH status feed FS#19358 — redirect

Task Type: Incident

Category: all

Status: In progress


We have detected mail delivery delays on redirect.ovh.net servers because of a large number of messages.
We are working to fix this problem.

July 26, 2016
OVH status feed FS#19138 — Mail

Task Type: Incident

Category: all

Status: Finished


We have detected delays on all operations related to Mail accounts (create, delete, change password, etc.).
We are working to fix this problem.

July 13, 2016
OVH status feed FS#19099 — Webmail

Task Type: Modernization

Category: webmail

Status: Planned


We are planning to migrate the Webmail infrastructure to a newer and more powerful infrastructure, on July 20th.
A service interruption of a few minutes will be required to carry out this operation; we will try to do it at night.
We will update this task as soon as we get more details.


July 7, 2016
OVH status feed FS#18594 — XC38

Task Type: Modernization

Category: exchange

Status: Planned


Hello,

We will proceed with the migration of the following Exchange accounts to Exchange 2016 on Tuesday, June 14th at 7 PM.
XC38.mail.ovh.net


During this intervention, access to mailboxes will be unavailable.

You will be notified by email once your platform has been migrated.

You can also track the status of the migration on this interface: http://migrationstatus.mail.ovh.net/

The Exchange Team

June 13, 2016
OVH status feed FS#18505 — Robot installation

Task Type: Incident

Category: all email services

Status: Finished


We detected delays on the email installation robot. We corrected the problem and the robot is catching up.

June 9, 2016
OVH status feed FS#18504 — mx1.ovh.net

Task Type: Incident

Category: exchange

Status: In progress


We have detected abnormal delays in the delivery of messages to mx1.ovh.net; our teams are working to solve this problem.
Some mx1.ovh.net servers are not responding, and the outgoing server is timing out.

Comments:

Date: Thu, 09 Jun 2016 15:50:14 +0200

We have excluded the server that was generating the problem. Message delivery is restored. We are monitoring.



June 9, 2016
OVH status feed FS#18416 — Hosted 2013

Task Type: Incident

Category: exchange

Status: In progress


Hello,

We encountered a problem on an Exchange server.
Several databases have moved to the second cluster node.

We are moving mailboxes to avoid overloading the server.

The Exchange Team

Comments:

Date: Mon, 06 Jun 2016 19:49:16 +0200

Hello, we have synchronized a 1.2 TB database so far. We will continue on our side. Exchange Team.

Date: Mon, 06 Jun 2016 19:51:30 +0200

(mbx019/mbx020) A new database has been synchronized. We will switch the DB to the second node between 12:00 and 1:00 PM.

Date: Mon, 06 Jun 2016 19:52:06 +0200

EX26 => rebuild of the raid system + raid data => no problems with the cluster synchronization.

Date: Mon, 06 Jun 2016 19:52:16 +0200

EX26 => rebuild of the raid system => 100%; rebuild of the raid data => 20%

Date: Mon, 06 Jun 2016 19:52:37 +0200

(mbx019/mbx020) 25 Db remain to be resynced, approximately 27 TB.

Date: Mon, 06 Jun 2016 19:52:48 +0200

EX26 => rebuild raid data => 46%

Date: Wed, 08 Jun 2016 19:57:19 +0200

EX26 => rebuild raid data => 66%

Date: Wed, 08 Jun 2016 19:57:48 +0200

(mbx019/mbx020) 4 Db synchronization in progress

Date: Wed, 08 Jun 2016 19:58:20 +0200

EX26 => rebuild raid data => 100%

Date: Wed, 08 Jun 2016 19:58:46 +0200

(mbx019/mbx020) 4 Db synchronization in progress

Date: Wed, 08 Jun 2016 19:59:17 +0200

(mbx019/mbx020) 1 Db synchronization in progress

Date: Wed, 08 Jun 2016 20:00:03 +0200

(mbx019/mbx020) 2 Db synchronization in progress



June 8, 2016

GRADWELL STATUS


Gradwell status feed RESOLVED: Loss of Connectivity on Broadband Services
This service post has been resolved. The following closing update was provided: The incident is now resolved. It may be necessary to restart your router equipment if the service does not recover automatically. If you have any further issues then please contact the support team.

July 26, 2016
Gradwell status feed UPDATED: Loss of Connectivity on Broadband Services
Our supplier's engineers are still investigating. A further update will be provided as soon as it becomes available. We regret any inconvenience this may cause.

We are currently working to resolve this issue and expect to provide the next update by Tuesday 26th July 2016 @ 08:00 (BST)

July 25, 2016
Gradwell status feed UPDATED: Loss of Connectivity on Broadband Services
Our supplier's engineers are en route to continue the investigation on site.

We are awaiting a further update as to the cause of the outage. An ETR will be made available once the root cause is established.

We are currently working to resolve this issue and expect to provide the next update by Tuesday 26th July 2016 @ 08:00 (BST)

July 25, 2016
Gradwell status feed UPDATED: Loss of Connectivity on Broadband Services
This fault is currently still being worked on by our upstream supplier. At this stage we have not been provided with an estimated resolution time, but we have been advised that this is being treated as a critical issue.

We will pass on any updates as soon as they are available.

We are currently working to resolve this issue and expect to provide the next update by Tuesday 26th July 2016 @ 08:00 (BST)

July 25, 2016
Gradwell status feed NEW: Loss of Connectivity on Broadband Services
Services Affected:
Broadband


Overview:
An upstream supplier is currently experiencing a service outage. This is affecting multiple exchanges across the UK causing total loss of connectivity on broadband services.

Impact:
Customers using broadband services, including ADSL and FTTC, may have lost their connection.
This will be affecting any VoIP services using the broadband connection.

Gradwell Action:
Our systems operations team is liaising with our upstream suppliers, who are working on this fault as an urgent matter.
We will post a further update by 17:00. Apologies for any inconvenience caused.

We are currently working to resolve this issue and expect to provide the next update by Tuesday 26th July 2016 @ 08:00 (BST)

July 25, 2016
Gradwell status feed RESOLVED: Intermittent inbound email issues
This service post has been resolved. The following closing update was provided: Our monitoring and testing is showing all mail behaving correctly. This status will now be closed however our system operators will continue to monitor in line with our normal procedures.

If you experience any further issues please contact our support teams on 01225 800888 and we will investigate these as isolated cases.

We apologise for any inconvenience caused.

July 22, 2016
Gradwell status feed UPDATED: Potential Connectivity Issues
Following routing changes made earlier today we are not seeing any issues with connectivity.

If you are still experiencing connectivity issues please power cycle your equipment and contact our support team.

Apologies for any inconvenience caused.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 16:30 (BST)

July 21, 2016
Gradwell status feed RESOLVED: Potential Connectivity Issues
This service post has been resolved. The following closing update was provided: Following routing changes made earlier today we are not seeing any issues with connectivity.

If you are still experiencing connectivity issues please power cycle your equipment and contact our support team.

Apologies for any inconvenience caused.

July 21, 2016
Gradwell status feed UPDATED: Potential Connectivity Issues
Our upstream suppliers have routed traffic away from the affected datacenter, which has prevented any further issues.

Traffic will not return to the original datacenter until we receive confirmation that the power failure has been fully resolved and tested.

If you are currently experiencing connectivity issues please power cycle your equipment before contacting our support team.

We will continue to monitor and provide updates.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 16:30 (BST)

July 21, 2016
Gradwell status feed UPDATED: Intermittent inbound email issues
Our monitoring and testing is showing improved performance however our engineers are still investigating a few reports of inbound delivery issues.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 21, 2016
Gradwell status feed NEW: Potential Connectivity Issues
Services Affected:
Broadband


Overview:
Our monitoring is showing that a wholesale supplier is experiencing power failures in a London datacenter.


Impact:
This issue may affect some customers' internet connectivity.
Although our suppliers have resilient networks, the problem may affect service stability.

Gradwell Action:
Our supplier is working to identify and resolve this issue. We are in regular contact with them and will provide updates on the issue as they are available.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 16:30 (BST)

July 21, 2016
Gradwell status feed UPDATED: Intermittent inbound email issues
Our monitoring and testing is showing all mail being delivered successfully.

This status will be closed later today if no further issues are reported.

If you do experience any further issues with inbound emails being delayed or missing please contact our support teams on 01225 800888 who will be able to assist further.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 20, 2016
Gradwell status feed UPDATED: Intermittent inbound email issues
Following the work completed by our engineers today we have seen a marked improvement in performance and all testing and monitoring is showing emails being delivered successfully.

We will continue to test and monitor and will post a further update tomorrow morning.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 19, 2016
Gradwell status feed UPDATED: Intermittent inbound email issues
Our engineers have identified the root cause and are making improvements to our email platform to resolve this issue.

Following the completion of these changes we will be conducting testing and monitoring to ensure that this issue is fully resolved and service has returned to normal.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 19, 2016
Gradwell status feed NEW: Intermittent inbound email issues
Services Affected:
Email Hosting


Overview:
Our engineers are investigating reports of sporadic issues with inbound email, including emails being delayed or failing to arrive.

Impact:
Customers may sporadically see large delays in receiving emails, or find that certain messages are not received.

Gradwell Action:
Our engineers are investigating this issue and we will post a further update by 11:00 tomorrow.

We apologise for any inconvenience caused.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 18, 2016