A Letter to CentriLogic CEO Robert Offley

To: roffley@centrilogic.com
CC: mrok@centrilogic.com, jedward@centrilogic.com, mrok@centrilogic.com, kaplin@centrilogic.com, cstaats@centrilogic.com, jreinard@centrilogic.com, jreinard@dacentec.com
BCC: robert.offley@centrilogic.com, mrok@centrilogic.com, jeff.edward@centrilogic.com, monica.rok@centrilogic.com, kevin.aplin@centrilogic.com, charlotte.staats@centrilogic.com, jason.reinard@centrilogic.com, jason.reinard@dacentec.com, offley@centrilogic.com, offleyr@centrilogic.com, levine@centrilogic.com, leviner@centrilogic.com, rok@centrilogic.com, rokm@centrilogic.com, aplin@centrilogic.com, aplink@centrilogic.com, staats@centrilogic.com, staatsc@centrilogic.com

Mr. Offley,

It saddens me that I need to write this message to you today; however, I'm at my wits' end, and there seems to be nothing but lost hope with the support management at your Lenoir, NC facility, Dacentec. I do appreciate you taking the time to read through this message and provide some feedback.

While I understand that we're a very small fish in a very large pond, I'm a firm believer that a customer's worth in feedback is calculated at a much higher rate than their worth in dollars. I have a long history and background in delivering exceptional customer experiences through support and service, which makes me incredibly critical of the poor service/support I receive from companies I choose to do business with.

This note is about issues I've had with services and support at Dacentec. Some of these issues stem from a lack of infrastructure and training, and some stem from a complete lack of care for your customers. I'll outline each below.

We turned up services with Dacentec on August 21, 2015, starting with 1U colocation and a /29 IP block allocation, then adding BGP announcement of our /22, with the anticipation that we could add additional servers and eventually move into cabinets as we expand our business.

September 5, 2015 : Ticket # 770027 : Network Packet Flood
We were advised that our port was shut due to a network packet flood to our machine. We had made some changes to services on the machine, and it looks like it was compromised. We requested additional information, such as origin and/or destination IPs, and were told that only Layer 2 monitoring is performed. That's a scary lack of insight into what is occurring on your network.

September 23, 2015 : Ticket # 394140 : Consolidate Invoices/Billing Dates
We requested to have our services consolidated onto a single invoice/billing date. Currently we receive (just for the colocation and “extras”) 3 invoices with 3 due dates. We were told that this couldn't be done, even after we pressed harder. The answer never made sense, so we simply dropped the ticket.

October 19, 2015 : Ticket # 319017 : URGENT – Server Offline
Our server went offline and we were unable to access the machine, IPMI (not on our IP space at the time), or the gateway. We requested a Root Cause Analysis for the issue and were advised it would be available the next day. On 10/27/2015, still having received no RCA/PIR, we requested escalation to management. We received the PIR on 10/28 with no escalation to management.

October 21, 2015 : Ticket # 379284 : Abuse – Packet flood from
Packet flood notification from an IP that we're no longer using (we've since moved to our own IP space). We requested information on the source/destination; again, they were unable to provide it.

November 22, 2015 : Ticket # 696006 : Can not access server
Advised of inability to access the machine. Asked for the root cause and was told “adjustments made by the network team”. On 11/23 asked what remediation steps were being taken, since we pay for BGP announcement of our own IP space; we had no ability to ping our gateway, nor to pull any BGP information about our announced space. Requested updates on 11/25, 11/30, and 12/1. Received a response on 12/1 indicating a network hardware issue – completely different from what I was told originally.

November 22, 2015 : Tickets # 804422 & 405648 : SLA Credits
No response to tickets until 12/1.

December 1, 2015 : Ticket # 529179 : Support Management
Requested Support Management contact info. Received a response that the ticket was escalated. Received a call on 12/2 (voicemail – see attached). Followed up by phone a few days later and left a voicemail. No response. On 12/10/2015, I advised on the same ticket that no response had been received; a reminder was sent to management by support staff. Left another voicemail for Jason on 12/11. No response. On 12/14/2015 I updated the ticket that I had yet to receive a call back. The ticket was notated that the agent was informed management would call me Tuesday. It's now Tuesday at 5:36p Eastern.

December 9, 2015 : Ticket # 927333 : Server Outage this morning
Submitted a ticket with a bandwidth graph showing a 3-hour outage. Requested the root cause. Was advised of a carrier outage and that a PIR would be made available in 24–48 hours. On 12/11 (Friday) requested an update on PIR availability; received the PIR 3 hours later. The PIR indicates that review and approval was by Jason Reinard – the same person I was told in Ticket # 529179 was out of the office and would call me back on Monday. I asked when the PIR was made available for customer consumption, and support advised 5 minutes prior to my receiving it. Either it was available before that and wasn't sent to us until requested, OR Jason Reinard did approve it on Friday but for some reason was unavailable to respond to me after numerous attempts.

Today, December 15, 2015 : 5:41p Eastern : Still no phone call, email or other form of communication from Dacentec management after my repeated escalation requests.

As an organization, we were excited to find a data center provider in North Carolina that we could grow with and grow into. The prices are reasonable, and the ability to advertise our own IP space is a huge plus. Support, for the most part, is lacking. It's unfortunate that your support staff have to deal with people like me, because it's not their fault. Teams win; management loses. It is absolutely apparent to me that the issues stem from management – not from your support agents themselves.

I anticipate a response shortly, and to ensure that this message is properly received, a copy is also being posted on your Facebook page, sent to you via Twitter, and shared on LinkedIn.

I can be reached at 415.488.5444 at any time.

Best regards,

Chris Hesselrode
General Manager | Conflux Technologies LLC

P. 919.867.1456
F. 919.930.8699
M. 415.488.5444

Update: added contact e-mail addresses for Centrilogic/Dacentec executive staff for existing customers experiencing issues with Dacentec services.