Status

ENTA NOC STATUS


Enta NOC feed Emergency Maintenance: VoIP platform

Wednesday 22nd February 2017 23:00 – 23:30

During the above window there will be some emergency maintenance work performed on the VoIP platform. Users may experience some disruption initiating calls or a disconnection of any call in progress. We apologise for the short notice of this work; however, to keep the number of affected users to a minimum, we are performing it at a time when call volumes on the platform are typically very low.



February 22, 2017
Enta NOC feed Edinburgh

We are aware of a potential problem with our Edinburgh POP. This will affect any services which connect directly to our core network here, including broadband and Ethernet. Initial investigation indicates that the cause may have been a reboot of the device; however, we will advise further once more information becomes available.



February 22, 2017
Enta NOC feed Potential VoIP Disruption

We are aware of a potential issue affecting some VoIP services. We are currently investigating and collating examples. We will provide further information once it becomes available.



February 17, 2017
Enta NOC feed At-Risk: Cardiff.core

Due to multiple link failures within the network at present, cardiff.core should be considered at risk. Engineers are already working on restoring redundant paths.

Further updates will be provided as and when they become available.



February 15, 2017
Enta NOC feed Incident: Birmingham 3 Power Loss

We are currently experiencing a loss of power within our Birmingham 3 point of presence. This is causing a loss of service for a small number of directly connected leased line customers.  Affected customers have been contacted and advised directly.

Engineers are currently investigating and resources are already on their way to site to continue the investigation. Further updates will be provided as and when they become available.



February 15, 2017
Enta NOC feed Incident: Telehouse East Satellite Device

We are aware of an incident in Telehouse East which briefly affected a small number of customers. The incident looks to have been caused by an unexpected reload of a satellite device. Services affected include Ethernet and DSL.

All services should now be restored and further updates will follow.

If you still have an issue with any individual circuits, please power cycle the device before contacting the support team. We apologise for the inconvenience caused.



February 7, 2017
Enta NOC feed Incident: Edinburgh DSL Connections

We are aware of an issue that affected DSL connections coming into our Edinburgh point of presence. Users coming into this node would have briefly lost connectivity. Our Network Operations Centre has investigated and service is now restored.

Further updates on the incident will follow in due course.



February 7, 2017
Enta NOC feed Telephone System Issue

We are aware of a problem with our telephone system and are currently experiencing difficulties making and receiving calls.  We are working with our provider to ensure functionality is restored at the earliest opportunity.



February 7, 2017
Enta NOC feed Planned Maintenance: Interxion

Thursday 9th February 2017 00:00 ~ 06:00

During the above window we will be replacing interxion2.core.enta.net and migrating customers from interxion.core.enta.net onto a higher-capacity router.

All services will be migrated to the new router; we expect downtime of approximately 1 hour during the window.

Affected services are:

All customer interconnects (Transit / EWCS / Wholesale handoff / Pseudowire) terminating at interxion.core.enta.net and interxion2.core.enta.net.

Traffic that would normally be routed through this equipment will be routed via alternate paths.



February 1, 2017
Enta NOC feed Incident: VoIP Disruption

We are currently aware of an issue affecting some VoIP Enrich customers. Engineers are currently investigating and further updates will be provided as and when they become available.



February 1, 2017

OVH STATUS - DEDICATED SERVERS


OVH status feed Rack 41D01

Task Type: Maintenance

Category: RBX3

Status: Planned


Summary: We have detected a fault in part of the cooling system of this rack.
Actions: Replacement of the defective parts
Start: Thursday 23 February 2017 at 1:30 PM CET
Affected rack: 41D01
Impact: None
Estimated time: 30 minutes

February 20, 2017
OVH status feed T05C32 fex vrack

Task Type: Maintenance

Category: BHS1

Status: Finished


Summary: We have detected a fault on the Fex vrack which requires a reboot.
Actions: Restart the Fex.
Start: Thursday 16th February 2017 at 05:00 AM EST
Affected racks: T05C32 (Beauharnois)
Impact: Disconnection of the vrack network during the reboot.
Estimated time: 15 minutes

Comments:

Date: Thu, 16 Feb 2017 11:07:21 +0100

We are starting the intervention.

Date: Thu, 16 Feb 2017 11:14:58 +0100

Intervention complete; the fault is fixed.



February 16, 2017
OVH status feed Rack 21A12

Task Type: Incident

Category: RBX2

Status: Finished


Summary: We have detected a malfunction on the switch which requires a reboot.
Actions: Restart the switch.
Start: Tuesday 14 February 2017 at 6:00 AM (French time)
Affected racks: 21A12
Impact: Disconnection of the public network during the reboot.
Estimated time: 10 minutes

Comments:

Date: Tue, 14 Feb 2017 07:54:57 +0100

The switch was still malfunctioning after the reboot, so we are replacing it.



February 14, 2017
OVH status feed G128A19 n3 vrack

Task Type: Incident

Category: GRA1

Status: Finished


Summary: We have detected a fault on the N3 vrack which requires a reboot.
Actions: Restart the N3.
Affected racks: G128A19
Impact: Disconnection of the vrack network during the reboot.
Estimated time: 15 minutes

February 12, 2017
OVH status feed Rack 03B04

Task Type: Maintenance

Category: RBX1

Status: Finished


Summary: We have detected a fault on the switch which requires a reboot.
Actions: Restart the switch.
Start: Thursday 26 January 2017 at 6:00 AM (French time)
Affected racks: 03B04
Impact: Disconnection of the public network during the reboot.
Estimated time: 5 minutes

Comments:

Date: Thu, 26 Jan 2017 05:45:02 +0100

Intervention finished.



January 26, 2017
OVH status feed Room 8

Task Type: Incident

Category: RBX1

Status: Finished


We have a malfunction of the cooling system for servers in room 8; our technician team is intervening to replace the defective part.

Comments:

Date: Wed, 25 Jan 2017 11:12:32 +0100

Intervention finished.



January 25, 2017
OVH status feed Vrack Tasks Canada

Task Type: Incident

Category: BHS1

Status: In progress


We are currently experiencing difficulties with a piece of Beauharnois Vrack equipment; the addition of new services to the Vrack is impacted.
No impact is expected on services already connected to the Vrack.

A support ticket has been opened with the manufacturer, and we are investigating.

Comments:

Date: Thu, 19 Jan 2017 11:03:49 +0100

Tasks are back in service. We continue to monitor.



January 19, 2017
OVH status feed Signing key debian/ubuntu ARM

Task Type: Maintenance

Category: all (dedicated servers)

Status: Finished


We changed the internal signing key on the following repositories:

http://last.public.ovh.hdaas.snap.mirrors.ovh.net/debian/
http://last.public.ovh.hdaas.snap.mirrors.ovh.net/ubuntu/

Therefore, you might experience the following error during an apt update:

W: GPG error: http://last.public.ovh.hdaas.snap.mirrors.ovh.net/debian jessie Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY DC286CCC60B23E2F
E: The repository 'http://last.public.ovh.hdaas.snap.mirrors.ovh.net/debian jessie Release' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

To add the new key on your server, please issue these commands:

# wget -qO - http://last.public.ovh.hdaas.snap.mirrors.ovh.net/debian/archive.key | apt-key add -
# apt update
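
If you want to verify the import, you can list the trusted keys and check that the key ID reported in the error above (DC286CCC60B23E2F) now appears; a subsequent apt update should then complete without the NO_PUBKEY warning:

# apt-key list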


January 10, 2017
OVH status feed 42E05

Task Type: Maintenance

Category: RBX3

Status: Finished


Summary: We have detected a fault on the public switch of this rack (42E05) which requires a reboot.
Actions: Restart
Start: 08/01/2017 at 6 AM
Affected racks: 42E05
Impact: Disconnection of the public network during the reload.
Estimated time: 15 minutes

Comments:

Date: Sun, 08 Jan 2017 06:02:06 +0100

We start the intervention.

Date: Sun, 08 Jan 2017 06:21:27 +0100

The reload did not fix the fault. We are replacing the switch.

Date: Sun, 08 Jan 2017 07:43:01 +0100

The new switch is up, and all servers are responding to ping.



January 8, 2017
OVH status feed Rack 73B43

Task Type: Incident

Category: SBG1

Status: Finished


There is an electrical issue on rack 73B43.
A technician is onsite to fix the issue.

Comments:

Date: Sat, 07 Jan 2017 04:03:35 +0100

The technician found the root cause and fixed it.

Date: Sat, 07 Jan 2017 04:41:58 +0100

All the machines are back up.



January 7, 2017

OVH STATUS - EMAIL


OVH status feed FS#22748 — Mailproxy

Task Type: Incident

Category: MX

Status: Finished


An overload has been detected on the email entry point.
We are temporarily delaying the delivery of messages.
There are delays in receiving messages.

Comments:

Date: Tue, 24 Jan 2017 14:51:36 +0100

We had an overload on the internal SQL servers for the Mailproxy. We have stabilized the SQL cluster and are starting to deliver the emails stored on the infrastructure. We are also beginning to accept new messages progressively.

Date: Tue, 24 Jan 2017 14:51:56 +0100

We are currently processing more than 70,000 messages per minute and are catching up on the backlog of delayed messages. New messages are being accepted intermittently.

Date: Tue, 24 Jan 2017 14:52:18 +0100

Traffic appears to have stabilized and we have caught up. A few messages are still awaiting delivery and will be delivered within a few minutes.



January 24, 2017
OVH status feed FS#20441 — Webmails

Task Type: Modernization

Category: webmail

Status: Finished


We will move the webmail cluster to another infrastructure.
This operation is scheduled for Wednesday night, if everything is ready.
Downtime is expected.

Comments:

Date: Wed, 21 Sep 2016 18:50:30 +0200

We will start the intervention at 23:00 tonight; it will take around 3 hours. During this intervention webmail access will be unavailable.

Date: Thu, 22 Sep 2016 19:55:07 +0200

Starting...

Date: Thu, 22 Sep 2016 19:55:36 +0200

Intervention complete; no problems during the migration. :) We will take care of the new infrastructure tomorrow and over the coming weeks.



September 22, 2016
OVH status feed FS#19650 — Orange, Wanadoo

Task Type: Incident

Category: All outgoing emails

Status: In progress


We have detected delays in email delivery to Orange and Wanadoo.
We are working on it.

August 10, 2016
OVH status feed FS#19358 — redirect

Task Type: Incident

Category: all

Status: In progress


We have detected mail delivery delays on redirect.ovh.net servers because of a large number of messages.
We are working to fix this problem.

July 26, 2016
OVH status feed FS#19138 — Mail

Task Type: Incident

Category: all

Status: Finished


We have detected delays on all operations related to Mail accounts (create, delete, change password, etc.).
We are working to fix this problem.

July 13, 2016
OVH status feed FS#19099 — Webmail

Task Type: Modernization

Category: webmail

Status: Planned


We are planning to migrate the Webmail infrastructure to a newer and more powerful infrastructure on July 20th.
A service interruption of a few minutes will be required to carry out this operation; we will try to do it at night.
We will update this task as soon as we get more details.


July 7, 2016
OVH status feed FS#18594 — XC38

Task Type: Modernization

Category: exchange

Status: Planned


Hello,

We will proceed with the migration of the following Exchange accounts to Exchange 2016 on Tuesday, June 14th at 7 PM.
XC38.mail.ovh.net


During this intervention, access to mailboxes will be unavailable.

You will be notified by email when your platform has been migrated.

You can also track the status of migration on this interface: http://migrationstatus.mail.ovh.net/

The Exchange Team

June 13, 2016
OVH status feed FS#18505 — Robot installation

Task Type: Incident

Category: all email services

Status: Finished


We detected delays on the email installation robot. We corrected the problem and the robot is catching up.

June 9, 2016
OVH status feed FS#18504 — mx1.ovh.net

Task Type: Incident

Category: exchange

Status: In progress


We have detected abnormal delays in the delivery of messages to mx1.ovh.net; our teams are working to solve this problem.
Some mx1.ovh.net servers are not responding, and the outgoing server is timing out.

Comments:

Date: Thu, 09 Jun 2016 15:50:14 +0200

We have excluded the server that was generating the problem. Message delivery has been restored. We are monitoring.



June 9, 2016
OVH status feed FS#18416 — Hosted 2013

Task Type: Incident

Category: exchange

Status: In progress


Hello,

We encountered a problem on an Exchange server.
Several databases have moved to the second cluster node.

We are moving mailboxes to avoid overloading the server.

The Exchange Team

Comments:

Date: Mon, 06 Jun 2016 19:49:16 +0200

Hello, we have synchronized a database of 1.2 TB so far. We will continue on our side. Exchange Team.

Date: Mon, 06 Jun 2016 19:51:30 +0200

(mbx019/mbx020) A new database has been synchronized. We will switch the DB to the second node between 12 and 1 PM.

Date: Mon, 06 Jun 2016 19:52:06 +0200

EX26 => rebuild of the RAID system + RAID data => no problems with the cluster synchronization.

Date: Mon, 06 Jun 2016 19:52:16 +0200

EX26 => rebuild of the RAID system => 100%; rebuild of RAID data => 20%

Date: Mon, 06 Jun 2016 19:52:37 +0200

(mbx019/mbx020) 25 DBs remain to be resynced, approximately 27 TB.

Date: Mon, 06 Jun 2016 19:52:48 +0200

EX26 => rebuild of RAID data => 46%

Date: Wed, 08 Jun 2016 19:57:19 +0200

EX26 => rebuild of RAID data => 66%

Date: Wed, 08 Jun 2016 19:57:48 +0200

(mbx019/mbx020) 4 DB synchronizations in progress

Date: Wed, 08 Jun 2016 19:58:20 +0200

EX26 => rebuild of RAID data => 100%

Date: Wed, 08 Jun 2016 19:58:46 +0200

(mbx019/mbx020) 4 DB synchronizations in progress

Date: Wed, 08 Jun 2016 19:59:17 +0200

(mbx019/mbx020) 1 DB synchronization in progress

Date: Wed, 08 Jun 2016 20:00:03 +0200

(mbx019/mbx020) 2 DB synchronizations in progress



June 8, 2016

GRADWELL STATUS


Gradwell status feed RESOLVED: Loss of Connectivity on Broadband Services
This service post has been resolved. The following closing update was provided: The incident is now resolved. It may be necessary to restart your router equipment if the service does not recover automatically. If you have any further issues then please contact the support team.

July 26, 2016
Gradwell status feed UPDATED: Loss of Connectivity on Broadband Services
Our supplier's engineers are still investigating. A further update will be provided as soon as it becomes available. We regret any inconvenience this may cause.

We are currently working to resolve this issue and expect to provide the next update by Tuesday 26th July 2016 @ 08:00 (BST)

July 25, 2016
Gradwell status feed UPDATED: Loss of Connectivity on Broadband Services
Our supplier's engineers are en route to continue the investigation on site.

We are awaiting a further update as to the cause of the outage. An ETR will be made available once the root cause is established.

We are currently working to resolve this issue and expect to provide the next update by Tuesday 26th July 2016 @ 08:00 (BST)

July 25, 2016
Gradwell status feed UPDATED: Loss of Connectivity on Broadband Services
This fault is currently still being worked on by our upstream supplier. At this stage we have not been provided with an estimated resolution time, but we have been advised that this is being worked on as a critical issue.

We will pass on any updates as soon as they are available.

We are currently working to resolve this issue and expect to provide the next update by Tuesday 26th July 2016 @ 08:00 (BST)

July 25, 2016
Gradwell status feed NEW: Loss of Connectivity on Broadband Services
Services Affected:
Broadband


Overview:
An upstream supplier is currently experiencing a service outage. This is affecting multiple exchanges across the UK causing total loss of connectivity on broadband services.

Impact:
Customers using broadband services, including ADSL and FTTC, may have lost their connection.
This will also affect any VoIP services using the broadband connection.

Gradwell Action:
Our systems operations team is liaising with our upstream suppliers as they work on this fault as an urgent matter.
We will post a further update by 17:00. Apologies for any inconvenience caused.

We are currently working to resolve this issue and expect to provide the next update by Tuesday 26th July 2016 @ 08:00 (BST)

July 25, 2016
Gradwell status feed RESOLVED: Intermittent inbound email issues
This service post has been resolved. The following closing update was provided: Our monitoring and testing is showing all mail behaving correctly. This status will now be closed however our system operators will continue to monitor in line with our normal procedures.

If you experience any further issues please contact our support teams on 01225 800888 and we will investigate these as isolated cases.

We apologise for any inconvenience caused.

July 22, 2016
Gradwell status feed UPDATED: Potential Connectivity Issues
Following routing changes made earlier today we are not seeing any issues with connectivity.

If you are still experiencing connectivity issues please power cycle your equipment and contact our support team.

Apologies for any inconvenience caused.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 16:30 (BST)

July 21, 2016
Gradwell status feed RESOLVED: Potential Connectivity Issues
This service post has been resolved. The following closing update was provided: Following routing changes made earlier today we are not seeing any issues with connectivity.

If you are still experiencing connectivity issues please power cycle your equipment and contact our support team.

Apologies for any inconvenience caused.

July 21, 2016
Gradwell status feed UPDATED: Potential Connectivity Issues
Our upstream suppliers have routed traffic away from the affected datacenter, which has prevented any further issues.

Traffic will not return to the original datacenter until we receive confirmation that the power failure has been fully resolved and tested.

If you are currently experiencing connectivity issues please power cycle your equipment before contacting our support team.

We will continue to monitor and provide updates.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 16:30 (BST)

July 21, 2016
Gradwell status feed UPDATED: Intermittent inbound email issues
Our monitoring and testing is showing improved performance; however, our engineers are still investigating a few reports of inbound delivery issues.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 21, 2016
Gradwell status feed NEW: Potential Connectivity Issues
Services Affected:
Broadband


Overview:
Our monitoring is showing that a wholesale supplier is experiencing power failures in a London datacenter.


Impact:
This issue may affect some customers' internet connectivity.
Although our suppliers have resilient networks, the problem may affect service stability.

Gradwell Action:
Our supplier is working to identify and resolve this issue. We are in regular contact with them and will provide updates on the issue as they are available.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 16:30 (BST)

July 21, 2016
Gradwell status feed UPDATED: Intermittent inbound email issues
Our monitoring and testing is showing all mail being delivered successfully.

This status will be closed later today if no further issues are reported.

If you do experience any further issues with inbound emails being delayed or missing please contact our support teams on 01225 800888 who will be able to assist further.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 20, 2016
Gradwell status feed UPDATED: Intermittent inbound email issues
Following the work completed by our engineers today we have seen a marked improvement in performance and all testing and monitoring is showing emails being delivered successfully.

We will continue to test and monitor and will post a further update tomorrow morning.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 19, 2016
Gradwell status feed UPDATED: Intermittent inbound email issues
Our engineers have identified the root cause and are making improvements to our email platform to resolve this issue.

Following the completion of these changes we will be conducting testing and monitoring to ensure that this issue is fully resolved and service has returned to normal.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 19, 2016
Gradwell status feed NEW: Intermittent inbound email issues
Services Affected:
Email Hosting


Overview:
Our engineers are investigating reports of sporadic issues with inbound email, including emails being delayed or still not arriving after some time.

Impact:
Customers may sporadically see large delays in receiving emails, or find that certain messages are not received.

Gradwell Action:
Our engineers are investigating this issue and we will post a further update by 11:00 tomorrow.

We apologise for any inconvenience caused.

We are currently working to resolve this issue and expect to provide the next update by Thursday 21st July 2016 @ 17:00 (BST)

July 18, 2016