B10WH

web hosting media

Archive for the ‘Have Your Say?’ Category

New Technologies Ease Disaster Recovery Protection

Posted by hosttycoon On April - 21 - 2017

The latest virtualization technologies, their Cloud-adapted versions and Virtual Data Center solutions make implementing a disaster recovery scenario and data protection much easier for businesses than the technologies we had just five years ago. What only corporations could afford in the past is now available to mid-sized and even small businesses at a reasonable cost. According to Antony Adshead of Computer Weekly, the new technologies are simply “removing excuses for not doing disaster recovery protection”.

With the technologies that are available to help speed, simplify and lower the cost of disaster recovery protection, your excuses for not having a plan are falling away.

There are many businesses that recklessly ignore the necessity of implementing a business continuity and data recovery plan.

Mr. Adshead says that “looking back to only five years… it was common to come across many businesses without disaster recovery plans or provision”. According to him, there are strong push factors that drive organizations in the direction of disaster recovery planning.

The economy of every country is becoming more digitalized year after year, so many governments impose laws that create legal and regulatory compliance requirements for various business niches. Financial services, for example, are heavily regulated in terms of data protection, and the national financial services regulatory bodies set standards for disaster recovery in the sector.

However, good companies do not need to wait for any government to impose data protection regulations on them. Any financial services provider needs a failover scenario which allows operations to be seamlessly moved to a redundant data center site in case the main systems fail. Those companies need proven fault-tolerant technology infrastructure, because when it comes to trading, seconds or even milliseconds are usually worth a fortune. If a trader or any other financial service operator cannot resume operations very quickly after a disruption, it would lose not just money, but also its reputation.

The possible loss of data, of a lot of money and the subsequent damage to reputation should be reason enough for any company to come up with a disaster recovery protection plan. In real life, the statistics show that the loss of data is not just a theoretical risk. There are businesses that have never managed to recover after an IT disaster, while those that do recover often suffer a heavy loss of customers.

The good news is that nowadays a disaster recovery plan is quite easy to implement. It is not that costly, and the new enterprise virtualization technologies and Cloud-based solutions lower the management effort.

Disaster Recovery Plan Through Virtualization

Should one start working on a disaster recovery project, the first thing to consider is a virtualized business continuity setup built on a server virtualization technology. Operating systems, applications and everything else are no longer tied to physical, bare-metal servers or even to stand-alone servers.

The use of a virtualization technology means that there is a hypervisor between the applications and the bare-metal computing devices. Virtualization standardizes the IT environment, so businesses should not worry much about the hardware specification of the underlying physical machines, as long as they are of the same generation and compatible with the requirements of the virtualization technology in use.

VMware's enterprise virtualization, for example, comes with a feature called Fault Tolerance, which does real-time mirroring of a virtual instance; should the primary instance fail, the failover scenario allows the redundant virtual server to take over without any outage.
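
To make the idea concrete, here is a minimal, application-level illustration of the failover principle: a monitor prefers a primary endpoint and switches to a standby when the primary stops answering. The hostnames and health-check URLs are hypothetical, and this is not how VMware Fault Tolerance works internally (FT mirrors the whole virtual machine at the hypervisor level); it is only a sketch of the concept.

```python
# Minimal illustration of a failover health check: prefer the primary
# endpoint and switch to the standby when the primary stops answering.
# Hostnames and URLs are hypothetical; hypervisor-level fault tolerance
# (e.g. VMware FT) mirrors the entire VM instead of polling a URL.
import time
import urllib.request

PRIMARY = "http://primary.example.com/health"    # assumed health-check URL
SECONDARY = "http://standby.example.com/health"  # assumed standby replica

def is_alive(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def active_endpoint() -> str:
    """Serve from the primary site; fall back to the standby if it is down."""
    return PRIMARY if is_alive(PRIMARY) else SECONDARY

if __name__ == "__main__":
    while True:
        print("Serving from:", active_endpoint())
        time.sleep(10)  # re-check every 10 seconds
```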

Business Continuity On The Cloud

With the Cloud, organizations do not even need to purchase expensive hardware or perpetual licenses for a virtualization technology that offers them a zero-downtime business continuity scenario. VMware's Fault Tolerance, for example, is affordable on the Cloud and can be used even by small businesses, which can implement a failover scenario that costs no more than a few hundred dollars per month.

Cloud Disaster Recovery (Cloud DR) allows businesses to use computing resources housed in a data center and delivered through the Internet. Local, office data can be transferred through a secure virtual private network to the data center (to the Cloud). The setup can also be a pure Cloud one.
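
In practice, the transfer over a secure VPN mentioned above is often just a scheduled copy job. Below is a minimal sketch, assuming rsync and key-based SSH access to the provider are already in place; the host, user and paths are made up for illustration and are not tied to any particular provider.

```python
# Push local office data to a remote disaster-recovery site over SSH.
# Requires rsync and key-based SSH access; host, user and paths are
# placeholders, not a real provider's endpoints.
import subprocess

LOCAL_DATA = "/srv/office-data/"  # local directory worth protecting
DR_TARGET = "backup@dr.example-datacenter.net:/backups/office-data/"  # assumed DR target

def replicate() -> None:
    """Run one incremental sync; rsync only transfers files that changed."""
    subprocess.run(
        ["rsync", "-az", "--delete", LOCAL_DATA, DR_TARGET],
        check=True,  # fail loudly so the error can be alerted on
    )

if __name__ == "__main__":
    replicate()  # typically scheduled via cron or a task scheduler
```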

Some firms would choose to use Cloud-based personal computers and do all their computing work in the Cloud, inside a data center. In this scenario, the IT service provider (the data center) is solely responsible for securing the data, and all the organization needs to do is keep a hard, off-site archive copy of its data.

Depending on the virtualization and Cloud computing technology in place, the organization might experience a short interruption of services in case of an incident, or none at all. Chief Financial Officers also tend to like the remote use of computing resources, because the Infrastructure-as-a-Service model is transparent from an accounting perspective.

Germany's United Internet bought web hosting company Strato from Deutsche Telekom. According to press reports, United Internet paid around €600 million ($629 million) in cash. It takes on 2 million customer contracts and approximately €130 million in annual revenue. This is another step toward consolidation of the European web hosting market. The company already owns popular web hosting, domain registration and online service brands such as 1&1, Fasthosts, InternetX, Sedo, Web.de, GMX and Mail.com.

Ralph Dommermuth, Chief Executive of United Internet, explained that the acquisition of Strato will make it possible for his company to expand its position in the “European hosting and Cloud application business”. He also added that such a deal “drives the consolidation of a market which is currently still strongly fragmented”.

The deal is backed by private equity group Warburg Pincus and values Strato at 12.4 times earnings before interest, tax, depreciation and amortization (EBITDA). According to experts, this valuation is in line with the multiple that American web host GoDaddy paid to acquire Host Europe Group (HEG).
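
For a rough sense of scale, the reported figures can be combined: a €600 million price at a 12.4x multiple implies roughly €48 million of EBITDA, which against the €130 million of annual revenue is a margin of about 37%. A quick check of that arithmetic, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the figures quoted above.
price_eur_m = 600.0      # reported purchase price, in millions of euros
ebitda_multiple = 12.4   # reported EBITDA multiple
revenue_eur_m = 130.0    # reported annual revenue, in millions of euros

implied_ebitda = price_eur_m / ebitda_multiple
print(f"Implied EBITDA: ~EUR {implied_ebitda:.0f}m")                    # ~EUR 48m
print(f"Implied EBITDA margin: ~{implied_ebitda / revenue_eur_m:.0%}")  # ~37%
```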

United Internet had also been interested in buying HEG, but eventually switched to the Strato deal. According to Reuters, the equity group Warburg plans to inject an additional €50 million into the business applications holding as part of the Strato deal. United Internet owns popular web hosting provider 1&1, which is among the biggest in Europe and has significantly grown its business in the United States within the last five years.

What Are Strato’s New Owner’s Expectations?

United Internet hopes that it will be able to attract small and medium German businesses as clients by selling them various services – websites, e-commerce solutions, CRM apps, security services and more. The company is also looking forward to continuing its web hosting acquisitions in order to consolidate the European IT hosting market.

Is The European Web Hosting Industry Going The American Way?

More or less, “yes”, as private equity firms and other types of investment ventures pour money into web hosting in order to increase “market consolidation”, which actually means increasing the market share of a few big web hosting providers and reducing the market significance of medium and small IT hosting and service providers. Investment funds and financial capital groups apply a formula which multiplies the value of each business based on its size and customer base.

The bigger the entity is, the better chance financial groups have to launch an IPO and sell overvalued shares on the stock exchange. Such an approach to business proves profitable in the short run. However, when it comes to the technology part of the business (the IT infrastructure services and Cloud service delivery), it poses certain risks. It is all about decision making. After such acquisitions, the decisions are made by CTOs and professionals who usually have little to do with the management of IT businesses and processes. The investors and stockholders are always eager to see a return on their investments. As a result, the companies increase pricing, and the IT management is very often pushed to change procedures and to impose restrictions and service terms which make customers increase their IT spending.

In the European Union, a single market on paper that comprises 28 national markets, 28 national languages and various business standards and cultures, it is very costly and sometimes virtually impossible to apply common procedures and organizational standards which would create a successful universal IT service model. So it is very likely that any “consolidation” of the European web hosting industry will be just a short-lived and unsuccessful attempt to apply American business practices to the European business environment.

United Internet's website does not depart from the finest traditions of corporate culture – producing self-sufficient structures which are focused mostly on bragging about their own success. “With its clear focus on the growth markets internet access and cloud computing, United Internet is ideally placed to benefit from the expected market growth”, says the company's website.

A Cloud Hosting Debate

Posted by hosttycoon On January - 4 - 2010

“How do you understand “Cloud Hosting”? What kind of infrastructure and platform do you imagine when someone mentions Cloud hosting?”, asked a member named HostColor in the popular web hosting forum Hosting Discussion. The forum user suggested 6 “fields” to be filled with answers: Operating system; Virtualization; Software; Network; Data center; Other features.

“You should include instances in your list also”, responded Conor Treacy, a “Community Advisor” at the HostingDiscussion board. “Remember a TRUE REDUNDANT cloud will be in multiple data centers. For me, I see too many hosting companies attempting to run their own cloud, or offer cloud hosting, and operate out of a single data center facility. Yes it likely does satisfy the requirements to be “cloud” but really, the purpose is to have instances in various parts of the world to serve the data faster”, added Conor, who also said that it costs more to do this, but “when you’re dealing with enterprise sites, you get what you pay for”. He mentioned that he does not pretend to know all about the cloud. “It’s too new and seems to be more “concepts” to many places than anything else”, said Conor.

A user named XeHost posted that “The cloud sounds great in theory but to implement proper cloud hosting infrastructure is very expensive”.

“I shall disagree that a true redundant Cloud shall be in multiple data centers”, responded HostColor, who added that this is only an option. The user who opened the thread said that operating infrastructure in different data centers is a different concept – a CDN – something which, according to the user, businesses did many years before the concept of Cloud computing emerged. “If you use a global redundant network for a Cloud hosting service, you don’t need to have infrastructure in different physical locations, unless you really need some kind of localization similar to Google local search. If you are a service provider, you do not need this”, said the user.

“Can you define “global redundant network”? If you do not need to have data in different locations, if a data center goes offline (like they do – it’s not UNCOMMON), how does the data stay active for viewers on the web? Doesn’t the data need to be replicated to an outside machine SOMEWHERE?”, was Conor’s response.

“I’ll throw my hat in the ring here”, said a HostingDiscussion user named “Bmdub”. He said that he has been in the hosting business for over 6 years. “I’d compare cloud hosting with the shared hosting methodology of the late 1990’s to early 2000’s. Today however, Cloud Computing has become a much different animal. There are higher levels of security, performance and manageability that are defining what cloud computing truly should be and is becoming right now”, explained the forum member, who summarized his understanding of Cloud hosting in 6 key points.

1. OS: I really think the OS selection is based on the capabilities of the provider and their ability to support those needs with experts. In my mind, Cloud Computing should offer both Microsoft and Linux based operating systems.

2. Virtualization: This is a piece of the puzzle. Right now, VMware, Citrix and Parallels are the only companies providing what I’d say is an easy-to-deploy platform to offer a scalable and secure computing platform. In the future, the underlying virtualization technology will matter less when APIs and customization become more prevalent. At this moment, I’d say that Citrix and VMware will dominate for quite some time because of their financial capabilities and their general acceptance as reliable products. Although Microsoft and Google will have something to say about that.

3. Software: I’d say any development platform should be built to live in a multi-tenant configuration and to scale easily across multiple processors.

4. Network: This is a big thing and the cloud most certainly should have more than one Tier-1 provider (Verizon, AT&T, Level3) connected to it. As someone mentioned earlier, geo-diversity – or a federated cloud – will build a truly resilient network for maximum uptime. Look for this from hosting.com in 2010.

5. Datacenter: Tier 3 or better data center. Multiple carrier access, N+1 or better power and cooling. 24x7x365 support.

6. Other features: Well, API support, geographic load balancing, easy to use customer interface (Self Service).
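
The “geographic load balancing” mentioned in point 6 usually means answering each visitor from the nearest healthy data center. Here is a toy sketch of that selection logic; real deployments do this with GeoDNS or anycast, and the data center list, coordinates and health flags below are invented purely for illustration.

```python
# Toy geographic load balancer: pick the closest data center that passes a
# health check. Real deployments do this in DNS (GeoDNS) or at an anycast
# edge; the coordinates and site names below are made up for illustration.
import math

DATA_CENTERS = {
    "us-east": {"lat": 39.0, "lon": -77.5, "healthy": True},
    "eu-west": {"lat": 50.1, "lon": 8.7,   "healthy": True},
    "ap-east": {"lat": 1.3,  "lon": 103.8, "healthy": False},  # simulate an outage
}

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine); accurate enough for routing decisions."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_healthy(visitor_lat, visitor_lon):
    """Return the name of the closest data center that is currently up."""
    candidates = {name: dc for name, dc in DATA_CENTERS.items() if dc["healthy"]}
    return min(
        candidates,
        key=lambda name: distance_km(visitor_lat, visitor_lon,
                                     candidates[name]["lat"], candidates[name]["lon"]),
    )

print(nearest_healthy(48.8, 2.3))    # visitor near Paris -> "eu-west"
print(nearest_healthy(35.7, 139.7))  # visitor near Tokyo -> "eu-west" (ap-east is down)
```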

A rather meaningless post followed, I believe, in which a user said “Some hosting companies claim that they are using a Cloud Hosting structure. But sometimes… it isn’t”.

Here came HostColor again, responding to Conor: “I’ll give an example. Having good and stable connections with 2 or 3 major U.S. carriers + NTT and another one to Asia and 2 more to Europe… will be enough to say you have a “global redundant network”.”

Conor responded by saying: “So “global redundant network” is not the same as a “global redundant site” then. You’re just looking at multiple carriers for the data. If the data center goes offline (network issue, power issue, someone trips over the power cord (ahem – Rackspace), or the electric room catches on fire (ahem – The Planet), or the basement is flooded (uhh.. can’t remember the data center, but it was in Chicago) – those items don’t necessarily play into the role of a redundant NETWORK – they relate to the SITE in particular”.

He also said that for him redundancy means a multi-location site where, if someone’s websites go offline in one place, they will be up in another. “This is what has been broadcast in a number of places offering cloud, and how stable and superior Cloud really is. Where in fact it’s nothing more than shared hosting with the ability to increase processing power, disk space, memory etc., all on the fly”, added Conor Treacy, who represents a company named “Hands On Web Hosting”.

An HD user from UK web hosting provider CSN-UK.net joined the discussion. He said that the concept of cloud computing isn’t new, considering it goes back as far as the 1960s, though the way in which it is being used by providers is. “The whole point of cloud computing from a hosting standpoint is to provide speed, stability and redundancy across as wide an area as possible in order to increase the benefit for the potential client base, done by virtualization”, posted CSN’s representative.

“However, bringing in the point that Conor made, the whole point of a cloud network is to provide a redundant network across multiple locations in order to avoid many of the problems of traditional systems and combine them with the benefits of similar VPS technologies. Otherwise the effectiveness of the cloud within a single datacentre is simply to provide an expandable VPS solution mirrored across multiple machines, as essentially it would have similar redundancy for many of the issues that cause us as providers downtime”, added the HD member.

He explained that the use of multiple transit providers does little to nothing to provide redundancy if, for example, a primary switch on the network malfunctions, or in any of the scenarios mentioned above. The virtualisation layer of the cloud network ensures that the data is mirrored across multiple sites, and an alternate site would take over or share the load with other sites so that the user’s site remains available and unaffected by the malfunction or natural disaster. “Where my knowledge is lacking is the information from scripts that are held in RAM or being processed which could lead to corruption, though there are a number of solutions I’m yet to read that in-depth to any particular approach”, said the CSN-UK member.

“Sure! There’s only one thing that I would like to point out and it is that having infrastructure and a redundant network across multiple locations IS NOT part of the “Cloud” concept. However I shall admit that if a company operates 2 or more facilities in a CDN, which is part of a cloud platform and/or service, it is something that shall be appreciated by its customers”, said HostColor, a user who represents the quite popular web hosting company Host Color.

This is the last post of the thread “Your Notion Of Cloud Hosting?” covered here. Follow the link to see how it continues and what other HD members think about Cloud hosting. To find reviews of Cloud Hosting providers, visit CloudHostingList.com.

Plesk and Parallels Under Fire?

Posted by hosttycoon On November - 25 - 2009

“After encountering an issue where the server can’t reboot into safe mode”, posted forum member “Onemancrew” today in Web Hosting Talk. He explained that the bug appears when he tries to reboot his server into safe mode: after the login screen is shown, an automatic reboot follows within a few seconds, and “no matter what you will do, you will not be able to boot your server into safe mode”.

“Yes, if you wonder yourself, after you install Plesk Control Panel over Windows Server 2003 you can forget your safe mode”, adds “Onemancrew”, who also says that the bug exists from version 7.6 through version 8.6.

The WHT member posted in the forum thread that he hasn’t checked version 9.2 yet, but added that he was “99.9% sure that this bug also exist in 9.2 version of Plesk Control Panel”. The bug, according to “Onemancrew”, is that after installing Plesk Control Panel on Windows Server 2003 the user cannot reboot the server into Safe Mode. He also says that the OS reboots automatically after showing the login screen.

“Parallels doesn’t worth your money. They don’t care about customer, all they care is about releasing more and more products but the word “quality” is unknown for Parallels developers”, states “Onemancrew”, who suggests “Don’t use Plesk Control Panel” and argues that “If application make the platform to stop working correctly then such software need to be abandoned”.

He also created another post shortly after the original thread was opened, saying that the ridiculous thing here was that Parallels demanded money for opening a support ticket, despite the fact that this is 100% a Plesk bug. According to him, the software producer’s policy is “to get money even about bug fixes”.

“It’s absolutely ridiculous. This is not how a software company need to treat existing customers”, said “Onemancrew”.

How did WHT members respond?

“This is the reason I switched to cPanel, I was having issues left and right with Plesk and their support team was never very helpful or I had to pay out the ass to get them to just look at it. cPanel support is superior for any software company that I’ve come across”, said Canadian member of Web Hosting Talk named “Certis”.

Here comes a WHT member with the nickname “Drew_Parallels”, who responded that he wanted to let the community know that he saw the thread and alerted the Parallels development team. “They’re investigating the bug now. As a possible (though not amazing) workaround, would be to set the Plesk Management Service (plesksrv) to Manual or Disable mode. Then you should be able to start up in Safe Mode. Unfortunately, Plesk has to be started manually after that. I’ll let you know as soon as the developers get back to me”, says “Drew_Parallels”, who adds “Sorry about the bug!”
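
For readers hitting the same problem, the workaround Drew describes boils down to changing the startup type of the plesksrv Windows service before rebooting into Safe Mode, and restoring it afterwards. Below is a minimal sketch using the standard Windows sc utility, run from an administrator prompt; the service name is the one quoted in the thread, and this is only the community workaround, not an official fix.

```python
# Script the workaround described above: set the Plesk Management Service
# (plesksrv) to manual start so Windows Server 2003 can boot into Safe Mode,
# then restore automatic start afterwards. Run from an administrator prompt.
import subprocess

SERVICE = "plesksrv"  # service name as quoted in the forum thread

def set_startup(mode: str) -> None:
    """mode is 'demand' (manual), 'disabled' or 'auto' - values understood by sc."""
    # sc expects the value as a separate token after 'start='.
    subprocess.run(["sc", "config", SERVICE, "start=", mode], check=True)

if __name__ == "__main__":
    set_startup("demand")   # before rebooting into Safe Mode
    # ...reboot, do the Safe Mode work, boot normally, then undo it:
    # set_startup("auto")   # let Plesk start automatically again
```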

A few members of WHT who joined the thread thanked Drew and showed their understanding.

The thread continues with “Onemancrew”, who posted “The BIG question is WHEN?”. He adds that he is asking the right question here, because Drew wrote that the Parallels development team was working on a bug fix.

“But again, the question is when the bug fix will be released for the public ?  It’s ridiculous that such BUG exist for 3 years! And until now the QA team didn’t find it. What does it mean that QA team didn’t find such a bug?  It’s means that SWsoft doesn’t have any QA team.”

“The word QA is unknown at SWsoft Company” (SWsoft is the old name of Parallels), wrote “Onemancrew”.

Another WHT member, “Dynamicnet”, addressed Drew and said that he had posted his own questions. “Like the post above, it has to do with QA (not only in software development, but also in writing KB articles). For example, http://kb.parallels.com/en/6656 could not have gone through quality assurance. Aside from easy to catch spelling errors, there are 21 hard coded patch files without any documentation”, writes “Dynamicnet”.

“Say you want to upgrade from Clam Anti-virus 0.95.2 to 0.95.3 using http://kb.parallels.com/en/6656 you have to contend with /src/hsphere-clamav.patch which is hard coded for 0.95.2 without any instructions on how to recreate the file for use with 0.95.3. Similarly there are 20 other hard coded patch files blocking the way of other upgrades making http://kb.parallels.com/en/6656 worthless. Furthermore, http://kb.parallels.com/en/6656 makes Parallels look bad because it is yet another public proof Parallels does not take quality assurance seriously”, added the WHT member.

Follow the whole thread “The Bigest BUG inside Plesk Control Panel, The Bigest BUG that SWsoft/Parallels Want” in Web Hosting Talk.

Is Your ISP Ripping You Off?

Posted by kevin On November - 18 - 2009

Is your ISP ripping you off? Recently, my ISP decided to go the route of setting a download limit for all their data accounts. Unfortunately, this is a trend that seems to be on the rise. Many of the major ISPs in the US are now imposing download caps, and their justification for doing so is really quite odd. The most common points that are brought up about the download cap are:

1 – Few “normal” customers will have to worry about coming close to the limit.
2 – It helps ensure that the network is not overloaded.
3 – Reduces the likelihood of customers using file sharing services.
4 – Allows the ISP to easily manage the number of connections to a hub.

Now don’t get me wrong, I’m all for companies making a profit from a service that they provide, but in my case, the cap is an extremely difficult thing to have to work with. I’m lucky enough to have a very good relationship with one of the leads at my ISP. He’s a great guy who is trying very hard to defend a policy that is really indefensible. You see, my ISP just bumped up the download speed available on my current tier to 25 Mbps. Now that’s a pretty fast connection, but they even offer a 60 Mbps connection! Of course, the price for the 60 Mbps connection is almost $100 per month. Add to that the fact that the bandwidth cap was only raised by 50 GB per month, and you begin to see the point: they did not increase the download cap in proportion to the speed increase.

I called my contact immediately after receiving my first warning that I was coming close to the download cap. I explained to him that I work from home and described some of the basics I use my connection for. My next question was whether there was an unlimited service plan I could subscribe to. He replied that there wasn’t, and probably never would be. We tossed around many different ideas, including the possibility of using a business package, but even those are not unlimited. We finally settled on a potential solution of putting in a second modem and bridging the connections to form one. This would, in theory, double my download limit as well as double my available speed. Unfortunately, it also means doubling my current bill.

This type of solution is not, by any stretch of the imagination, ideal. Unfortunately, I’ve been unable to even see if it’s possible, because the signal on the cable line coming into my home is not strong enough to support two modems, so I’m forced to wait until they can send out a tech to drop a second line into the house.

Now, all that being said, the most irritating part of this entire situation is the fact that there is no other viable alternative for me to switch to. Sure, there is a DSL provider, but the best speed they can offer is a 7 Mbps connection, and it is also capped.

All this got me thinking about why an ISP would choose to implement such a policy, especially considering that they maintain that less than 1% of their entire customer base comes close to the designated limit. In my particular case, they charge $1.50 per GB downloaded past the limit. Granted, this might not be such a huge hardship if you only go over by a few GB, but if you were to go over by 35 GB, that would add an extra $52.50 to the bill.

There are all sorts of services available to those with broadband connections. Netflix offers instant streaming of available videos. This option alone is quite interesting, especially if you choose the 1080p version. Each 1080p movie will run anywhere between 27 and 40 GB! That’s almost half of my allotted monthly quota for just one movie! Then you have streaming internet radio services, and streaming video.
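
Putting the numbers quoted in this post together makes the problem obvious: the overage fee alone for 35 GB is over $50, and at 25 Mbps a single large 1080p download finishes in a few hours, so the cap is easy to hit. A quick worked calculation using only the figures mentioned above:

```python
# The overage and streaming figures quoted in this post, worked out.
overage_rate = 1.50   # dollars per GB past the cap
overage_gb = 35
print(f"{overage_gb} GB over the cap costs ${overage_gb * overage_rate:.2f}")  # $52.50

movie_gb = 40         # upper end of the quoted 1080p movie size
link_mbps = 25        # the current tier's download speed
seconds = movie_gb * 8 * 1000 / link_mbps  # GB -> megabits -> seconds (decimal units)
print(f"A {movie_gb} GB movie takes about {seconds / 3600:.1f} hours at {link_mbps} Mbps")  # ~3.6 h
```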

As evidenced by past leaps forward, there are sure to be more things that will become an integral part of our online lives, and those things are almost certain to utilize a broadband connection like never before. What happens then? Will a class action lawsuit be required in order to force ISPs to retool their packages? Whatever happened to the United States being the most technologically advanced country in the world? In the Netherlands, for example, virtually every household is wired directly with fiber: no download limits, no caps, and blazing speed. What about getting the government involved? Some kind of overlord to bring the various ISPs in line?

There are so many facets to this issue, none of which will make much of a difference until the customer has some kind of leverage to make their ISPs listen. I fervently hope that day is not far in the future.