B10WH

web hosting media

Archive for the ‘Have You Say?’ Category

Who’s Moving Linux Ahead

Posted by hosttycoon On August - 23 - 2009

“How fast is Linux going?”, “Who is doing it?”, “What are developers doing?”, and “Who is sponsoring it?” The answers to these and many other questions can be found in the latest report released by The Linux Foundation. It provides details about the development of the kernel which forms the core of the Linux OS. The foundation said that the development of the Linux kernel is a “result of one of the largest cooperative software projects ever attempted”. According to the report, the regular kernel development releases deliver stable updates to Linux users, each with significant new features, added device support, and improved performance. “The rate of change in the kernel is high and increasing, with over 10,000 patches going into each recent kernel release. These releases each contain the work of over 1,000 developers representing around 200 corporations”, says the report.

Since 2005, more than 5,000 developers from nearly 500 different companies have contributed to the Linux kernel. The kernel has thus become a common resource developed on a massive scale by companies which are fierce competitors in other areas. A number of changes have been noted since this paper was first published in 2008. One of them is a 10% increase in the number of developers contributing to each kernel release cycle. The rate of change has increased significantly, and the number of lines of code added to the kernel each day has tripled. Since 2005 the kernel code base has grown by over 2.7 million lines. Over the last four years Linux has established a robust development community which “continues to grow both in size and in productivity”, as The Linux Foundation reported.

The Linux kernel is the lowest-level software running on a Linux OS. It is used to manage the hardware, run user programs, and maintain the overall security and integrity of the whole operating system. It is the kernel that, after its release by Linus Torvalds in 1991, inspired millions to join the development of the Linux OS and of applications for Linux as a whole. The kernel is a relatively small part of the software on a full Linux system; many other large components of the OS come from the GNU project, the GNOME and KDE desktop projects, the X.org project, and others. But the kernel is the core of the OS which determines how well the operating system works, and it is the piece which is truly unique to Linux.

The Linux kernel is one of the largest individual components of almost any Linux OS. It features one of the fastest-moving development processes and involves more developers than any other open source project. The Linux Foundation said that since 2005, kernel development history has been well documented, thanks to the use of the Git source code management system.
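Because the full history lives in Git, anyone with a clone of the kernel repository can reproduce contributor statistics of the kind the report quotes. Below is a minimal sketch in Python; the local clone path (./linux) and the release range v2.6.30..v2.6.31 are just assumptions for the example, and `git shortlog -ns` is the standard Git command for counting commits per author.

```python
#!/usr/bin/env python3
"""Rough sketch: count commits per author between two kernel releases.

Assumes a local clone of the Linux kernel at ./linux and that the tags
v2.6.30 and v2.6.31 exist in it (example range only).
"""
import subprocess

REPO = "./linux"                 # path to a local kernel clone (assumption)
RANGE = "v2.6.30..v2.6.31"       # example release range

def commits_per_author(repo, rev_range):
    # 'git shortlog -ns' prints "<count>\t<author>" lines, most active first.
    out = subprocess.run(
        ["git", "-C", repo, "shortlog", "-ns", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = []
    for line in out.splitlines():
        count, author = line.strip().split("\t", 1)
        rows.append((int(count), author))
    return rows

if __name__ == "__main__":
    rows = commits_per_author(REPO, RANGE)
    print(f"{len(rows)} developers contributed in {RANGE}")
    for count, author in rows[:10]:
        print(f"{count:6d}  {author}")
```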

Linux Kernel Development

The Linux kernel is developed by the community on a loose, time-based model, with new major kernel releases occurring every 2 to 3 months. The model was formalized in 2005. It works well for Linux users because they get new features into the mainline kernel with a minimum of delay. The model is based on the idea that the pace of kernel development should be as fast as possible. Another advantage is that distributors of the Linux OS need to apply only a minimum number of external changes.

A significant change in the most recent kernel releases that the Linux Foundation reported in its latest paper is the establishment of the linux-next tree.

“Linux-next serves as a staging area for the next kernel development cycle; as of this writing, 2.6.31 is in the stabilization phase, so linux-next contains changes intended for 2.6.32. This repository gives developers a better view of which changes are coming in the future and helps them to ensure that there will be a minimum of integration problems when the next development cycle begins. Linux-next smooths out the development cycle, helping it to scale to higher rates of change”, says the Linux Foundation report.

It also explains that after each mainline 2.6 release, the kernel’s “stable team” – currently made up of Greg Kroah-Hartman and Chris Wright – takes up short-term maintenance to apply important fixes. The stable process ensures that important fixes are made available to distributors and users and that they will also be part of the next major releases. According to the foundation, the stable maintenance period lasts at least one development cycle; for specific kernel releases, however, it can go on significantly longer.

Who Is Moving Linux Ahead?

The numbers show that 18.2% of the Linux kernel is written by people who aren’t working for any company, and 7.6% is created by programmers who don’t affiliate their contribution with any business entity. The others who write Linux are paid to contribute to the OS. Here are some of the companies which contributed more than 1% of the current Linux kernel: Red Hat: 12.3%; IBM: 7.6%; Novell: 7.6%; Intel: 5.3%; independent consultants: 2.5%; Oracle: 2.4%; Linux Foundation: 1.6%; SGI: 1.6%; Parallels: 1.3%; Renesas Technology; and others.

More information can be found in the Linux Foundation’s report, “Linux Kernel Development”.

Top 10 Shared Hosting Providers – The Impossible Rank?

Posted by hosttycoon On July - 8 - 2009

The most stupid question in the web hosting industry is probably one like “Which are the top 10 shared hosting providers?” It is so stupid that there actually is no answer. Yet thousands of money-making chasers give birth to a new “Top Hosting Directory” every single day. That’s why there are a bunch of “top” or “best” web hosting sites and directories out there. Some of them even receive high rankings in search engine result pages and bring thousands of dollars to their owners.

B10WH was there 4 years ago, but we decided to change the concept. At the same time we still pay attention to those who spend a lot of time building the next top web hosting list. Today in “Have You Say” we will draw your attention to a discussion on Web Hosting Talk titled “Top 10 Shared Hosting Providers – What Are Your Thoughts”.

“Hello. I am new to WHT and a grad student at University of Texas. My purpose for joining is to gain research for a thesis I am doing titled “Internet – Where it all begins?” My focus is on the hosting provider side of the industry considering it is a $19 Billion dollar industry”, says the WHT user “HostingResearcher” in a thread. He explains that the research he has found has led him in many different directions.

“It seems there are many websites that tell you who the Top 10 providers are within the industry, however they provide no background data on why”, says the forum member. He also explains that there are some sites that provide user reviews, “however small sample data”. “This need for data brings me to you all”, posts HostingResearcher, and asks the questions “Who is the best shared hosting company?” and “What makes them the best?”.

He also asks Web Hosting Talk members to help him gather this data by responding to a quick survey. HostingResearcher calls on WHT members to rate, from 1 to 5, the web hosts’ service offerings, pricing, customer service, market perception, experience, and quality.

What have you said?

“There are no “best” hosts. The top sites are mostly affiliate payout sites, pretty worthless. What’s best for me may be terrible for you and vice versa – it all depends on what you find best and how the host works with you and/or for you”, wrote “njoker555”.

“There must be some consistency from one provider to the next. It seems that most of the shared hosting providers offer similar packages for space, control panel, bandwidth, support, and pricing. If you had to purchase a shared hosting package today, then who would you look at first?”, responds “HostingResearcher”.

Another WHT member, “SoftsysHosting-Rick”, says that most of the top 10 hosts you’ll be looking at will be the ones who pay a good sum of $$ to get up the list. “However, as njoker mentioned, it all depends on your usage and it will be really difficult for you to come up with a list of say top 10 or top 20 hosts – the reason is simply that top 10 hosts differ from requirements to requirements”, adds the forum member.

“Hence, what I’d suggest you to do is to first gather different kind of requirements amongst customers and thereafter come up with top 10/20 hosts for each specific requirements. These requirements can be in terms of platform of hosting (Linux/Windows), type of hosting (shared/reseller/vps/dedicated), kind of hosting (website hosting/database hosting/email hosting/backup/all), etc. I believe, you will need to do quite a good amount of research and thereafter come up with your requirements after which you should be looking at customer inputs for their best host matching specific requirements. Fortunately, you are at the right place and you will get good amount of data/help from folks here”, explains “SoftsysHosting-Rick”.

“CodyRo”, a forum member with 478 posts (at the time he joined the conversation), said that it’s going to be very tough to make a conclusive list.

“For instance you have some smaller companies that people are very happy with, but their customer base is not nearly as far reaching as the larger hosts. As a result it’s going to be very difficult to get a consistent answer – especially on forums such as these where the small / medium sized hosts often have a decent following. Also things such as quality are merely perceptions and opinion. It’s going to be difficult to gather enough hard data based on that criteria”, explains CodyRo.

Ldcdc (Dan of WHReviews), one of the most fanatical WHT members, with the status “Community Liaison 2.0” and 17,997 posts on Web Hosting Talk, said it would be quite fantastic indeed and commented that anyone who wants to rank the top shared hosting providers may have to do their own research using the forum and some of the so-called “top web hosting directories”. “How accurate it will all be, would end up being an endless debate. I know it for a fact that on one of them, some of the reviews were planted by the host itself, so the defense systems of these sites (those that actually have such) are not perfect. Then again, what is perfect in this world”, writes Ldcdc.

The next poster, “NetDistrict”, adds that he doesn’t think people need to look at how large any web hosting company is. “This is less important to what the hosting company can offer the customer: the quality, support options and prices of the hosting company”, posts NetDistrict.

The last in line, Host Color, posts that the price of web hosting services should be explained and that prospective customers must know what they pay for.

“There are companies that sell cheap, but don’t use redundant network or even a SLA. Others are very stable in terms of network and facilities but their support is not that good. From my point of view the quality of customer service is crucial in our industry. The communication with customers must be well organized and automated. However a high level of automatization is not just about buying software. It depends of how any web host manages different processes”, adds HostColor.

The thread is still open – http://www.webhostingtalk.com/showthread.php?t=873969. You may join it, but following the discussion and posting meaningfully is a must.

WHT Members Say: What Is Cloud Computing?

Posted by hosttycoon On May - 5 - 2009

“I think a good way to start off this forum section is to determine what cloud computing means to everyone, I know there are a lot of different views and it should be interesting to see them”, says HP-Kevin in a thread titled “What is Cloud computing?” on WHT. A member named “Karbon” adds that he is “curious as to what is it as well” and adds that he has never heard of it. “Is it where you host in the clouds?”, writes the WHT member, adding “joke, don’t yell at me”. These are only a few of many examples showing that the average web user is still unfamiliar with the concept of cloud computing, and of cloud hosting in particular. Wikipedia offers an explanation of the term “Cloud Computing”, and Web Hosting Talk also has a detailed explanation in its Wiki section, so we will not go over the “Cloud” itself. What is most important is to see what people think of cloud computing.

“Interested as well in knowing what cloud hosting is and does”, reads another Cloud newbie post on WHT. As always happens, the first one who knows more about the topic eventually emerges, this time in 4th place. His nickname on WHT is Plutomic-Andrew, and he offers an explanation of the “Cloud”.

Cloud is…

“Cloud hosting, in one form or another, is the clustering of multiple physical hardware nodes together to act as a single server, with nearly unlimited resources since it can be continually added to seamlessly without adversely affecting the applications running on the cloud currently”, says Plutomic-Andrew. He adds that such a single cluster or grid is then broken down into individual VEs or Virtual Environments.

“Each VE is a self-contained LAMP stack running on top of any OS the customer would like while having access to the computing power of multiple processors and multiple GBs of RAM to perform its computing tasks. In most cases depending on the architecture of the cloud, each VE can expand dynamically to withstand the influx of heavy traffic or in other cases storage demands. These demands are typically caused by a site being dugg, slashdotted or in one way or another gaining more exposure than they would under typical daily circumstances”, writes Plutomic-Andrew, becoming the first in the thread to know something about cloud computing. However, it is a very technical explanation, and most forum users probably got lost by the third line.
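For readers who prefer code to prose, here is a toy Python model of the idea Plutomic-Andrew describes: a pool of physical nodes treated as one resource and carved into VEs whose allocations can grow on demand. It is purely illustrative; the class names, node names and sizes are made up and it does not correspond to any vendor’s actual API.

```python
"""Toy model of the 'cloud' idea described above: a pool of physical nodes
acts as one big resource, carved into virtual environments (VEs) whose
allocations can grow on demand. Purely illustrative; no real cloud API."""
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    ram_gb: int            # total RAM on this physical node
    used_gb: int = 0       # RAM already handed out to VEs

    def free_gb(self):
        return self.ram_gb - self.used_gb

@dataclass
class Cloud:
    nodes: list
    ves: dict = field(default_factory=dict)   # VE name -> allocated GB

    def provision(self, ve, ram_gb):
        """Place a VE's allocation on whichever node has room."""
        for node in self.nodes:
            if node.free_gb() >= ram_gb:
                node.used_gb += ram_gb
                self.ves[ve] = self.ves.get(ve, 0) + ram_gb
                return
        raise RuntimeError("cloud is out of capacity")

    def scale_up(self, ve, extra_gb):
        """Grow a VE 'seamlessly' by grabbing capacity anywhere in the pool."""
        self.provision(ve, extra_gb)

cloud = Cloud(nodes=[Node("node-a", 32), Node("node-b", 32)])
cloud.provision("customer-lamp-stack", 4)    # a normal day
cloud.scale_up("customer-lamp-stack", 28)    # site gets "dugg": burst within the pool
cloud.scale_up("customer-lamp-stack", 8)     # spills over onto the second node
print(cloud.ves, [(n.name, n.used_gb) for n in cloud.nodes])
```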

Proof that the explanation went over many heads is Karbon’s remark – “Ah, I feel stupid now. I should of known what it was. It’s basically a server cluster”.

“Its really exciting I cant wait to try it out for my new project, but I do have some doughts”, writes another forum member. He adds that in his course he was taught that “the cloud will be when all applications and data are stored on the web and we simply connect using “terminal Pcs” in effcet moving backwards”.

… a Buzzword

“To some ‘the cloud’ is the answer to everything – to others just an overhyped buzzword… We (UK2group) takes it very seriously, and I see this taking over most of the dedicated server market within the coming 5 years. It is very far from mature though – one of those things everyone talks about but really only a handfull actually provide. Buzzword or not, it is bound to change some of the mechanics of the hosting industry”, says the WHT member with the nickname “eming”, who becomes the first self-promoter in the thread, underlining that his company takes cloud computing seriously. He also provides a link to a Wall Street Journal article (http://online.wsj.com/article/SB123802623665542725.html) that should clear the air around the clouds, according to him.

“Thats a great explaination”, posts a WHT member who named himself “you86”, and asks the questions “Will all these hardware nodes in queue? What will happen if one/two will broken down? everything will stop working?”.

It is just marketing…?

HP-Kevin, who opened the thread about cloud computing, responds to Andrew’s cloud hosting explanation and asks him: “Can you name some/any company out there that can provide you a VE with a “self-contained LAMP stack” with nearly unlimited scalability? Or even scalability beyond the resources of a single physical hardware node?”.

“What you have explained is what I find many people expect from “cloud hosting”, but I disagree, and I am not sure the functionality you have described yet exists. So far I am with Tim on this, and its just marketing”, posts HP-Kevin.

An example of a Cloud

Plutomic-Andrew responds to HP-Kevin by saying that “Most all cloud computing environments will allow you to use more than one physical piece or hardware for computing power”. He gives an example with 3tera’s AppLogic, and says to “you86” that, depending on the architecture of the cloud, the services can be re-provisioned and started on other physical equipment in the cloud.

“True, but AFAIK even applogic does not allow you to scale one wm across more than one physical hypervisor”, responds user “eming” – Ditlev Bredahl, CEO, uk2Group.com.

“There was a good article in the Financial Times last Thursday about cloud computing… it’s definitely picking up momentum”, posts “dazmanultra”. “Jonrdouglas” says that hybrid clouds let anyone run “what is generally code that runs only on Windows, in the same directory as your files you generally run on Linux”.

According to “HostedFTP” it is a huge market, expected to become a $100 billion industry within the next 10 years. “Amazon AWS S3 and Ec2 is the major player at this time. Many companies are already moving into the cloud as it is very inexpensive to host”, says HostedFTP.

“I am involved in a SaaS based software project that my partners and I are hoping will grow exponentially for us over the next few years. We are working on some contracts now and I really think that it will be in my best benefit to launch our software on a cloud so that we can quickly and easily grow as needed”, says VertexBilly, adding that he has just contacted 3tera to get their sales info and learn more about what they can do “to help us launch this”. “I am not sure if we are ready for this infrastructure and cost yet (as we would do our own cloud in house) but I really do think that for companies that are working on a similar SaaS type business model as us that a cloud based infrastructure is probably the way to go and not just hype”, explains the WHT member.

… Waste of time + Extra Costs

Nathaniel, whose forum name is “logikstudios”, posts that his personal opinion is that the cloud is potentially a waste of time on a major scale. “Look at it like this. Look at all the current providers out there. They are WAY! to expensive for 95% of the users and you can’t really benefit the use of them. I was looking at moving a project to the cloud and thought to myself, to get the same type of power we require now was going to cost us £250+ extra a month (approx 2.5 x the amount). I can see websites/companys possibly having there own mini clouds setup, but interms of of processing power and storage (nothing more really than a super server with a DAS), its going to have to come down alot! before it really kicks off i think personally, maybe around VPS pricing”, says Nathaniel.

“I would class cloud computing as basically making your documents available to you anywhere, microsoft is really trying to get the lead on this by launching live services like Azure and Live Skydrive, profile and whatever else”, responds “FortressDewey”, who adds that if he needs to view a file he just emails it to himself.

A “Web Hosting Talk” forum member with the nickname “hwmcneill” says in his first post that he has just come across the thread, which he finds interesting, and that he would like to comment on some of the points made.

“AmirKhan” raises the point that it is a platform for the “no software” approach. This, as far as I am concerned, is correct. In a sense the two processes have occurred in parallel thus creating the virtual cloud concept. “You86” asked about what happens when a node breaks down and I might add runs into capacity problems. These instances are handled by normal overload chaining to other live servers. In the case of breakdown, that is the autochaining is incapacitated, the use of virtual client technology (VCT) can have the browser detect non-response and switch to a predetermined priority list of servers. Thus even if the “central server” fails the VCT component keeps operations running with a gap in operations equivalent to less than 1 sec (depending upon sys bandwidth).

“HostedFTP” mentions that clouds are expensive to host. Well each node or server can be hosted like any other. The configuration is in the software so a cloud can be made up of servers located anywhere in the world. Sharing such resources and making use of online applications rather than multiple redundancy sw at the client end saves an enormous amount of money in outlays. And any updates at the nodes affect all users making “roll outs” extremely low cost. Logikstudios raises the point of high costs but I am not sure what these refer to so I cant comment. Certainly higher users make the whole thing cheap and so the cost issue can be an issue of not having enough users to justify the initial outlays. “DHD-Chris” mentions clouds as being a facilitation of global access to docs etc. Yes, this is also part of the support functions i.e. data processing, transmission and storage with web document rendering being essentially a dialogue in a browser combining storage access and transmission”, posts hwmcneill. He adds that this is just his initial response to the thread and that he is going to rustle up some notes to provide a follow-up. He also says that he will share some examples from his own experience in the cloud computing field.
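The “switch to a predetermined priority list of servers” behaviour hwmcneill describes is, at its core, simple client-side failover. Here is a minimal Python sketch of that pattern, with made-up hostnames; it is not a specific “VCT” product, just an illustration of trying servers in priority order until one responds.

```python
"""Sketch of client-side failover: try a predetermined priority list of
servers and fall through on non-response. Hostnames are hypothetical."""
import urllib.request

# Priority-ordered list of equivalent servers (hypothetical hostnames).
SERVERS = [
    "https://node1.example-cloud.net",
    "https://node2.example-cloud.net",
    "https://node3.example-cloud.net",
]

def fetch_with_failover(path, timeout=1.0):
    last_error = None
    for base in SERVERS:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()          # first responsive server wins
        except OSError as exc:              # connection refused, DNS failure, timeout...
            last_error = exc                # ...so move on to the next server in the list
    raise RuntimeError(f"all servers failed, last error: {last_error}")

if __name__ == "__main__":
    print(fetch_with_failover("/status"))
```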

… Concept Still Unclear?

“The concept is still somehow unclear for me”, writes “HSNM”. “What I can see is a matter of distributed computing that handles the problem of connecting different pieces together to let a single service run in an abstract way”, says the forum member, who explains that Google Calendar can be an example of a service which uses cloud computing. According to him, “the Internet is a big cloud by itself”.

… It’s About Distributing Servers And Loads Between Many Computers

“I guess one can look at it from several angles. For me its about distributing servers and loads between many computers and in some cases geographic locations. It adds ability to dynamically or at least quickly scale out. Allowing to handle spikes in server loads as well as adds a layer of redundancy”, comments “dariusf”. He adds that the problem he sees with it is two-fold; one part is the cost for larger requirements.

“It does cost more then getting part of a rack and installing a bunch of servers. It removes the layer of server management and support and this could be a very large cost of the operation when you have to maintain multiple servers, clusters, load balancers and other networking hardware. So perhaps the overall cost will be lower”, explains the WHT member. The biggest question he has at this point, and one he just hasn’t had much time to investigate, is the question of user session replication at the application layer between the virtual nodes.

“With at least ColdFusion one has the option to store user session specific information on the database and in this case every node in the cloud would be able to hit that database cloud and the session for the user. Its slow and not as elegant as doing something like Jini and buddy under Java, which ColdFusion being in essence java and running under the JVM can use. Doing session replication between the nodes with something like Jini would be the best solution but how does that work and what about dynamic new nodes added in the cloud?”

“Another question is the database replication. Does it get moved in to the cloud and then replicated between the nodes or does it stay out of the cloud and gets only hit by the cloud?”

“If it stays in the cloud, would one set a fixed number of nodes and then replicate between them or is there a way to dynamically replicate as nodes are created or removed based on the loads? Any ideas out there? Anyone has any experience with this?”, asks the WHT member.
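The database-backed option dariusf refers to (every node reading and writing sessions in one shared store, so any node can serve any user) can be sketched in a few lines. The example below uses Python and SQLite purely for illustration; in a real cluster the store would be a database server reachable by all nodes, and replicating that store is exactly the open question he raises.

```python
"""Minimal sketch of database-backed sessions: every node in the cloud
reads/writes sessions in one shared store, so any node can serve any user.
SQLite stands in here for a shared database server."""
import json
import sqlite3
import uuid

conn = sqlite3.connect("sessions.db")   # in reality: a DB reachable by all nodes
conn.execute("CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, data TEXT)")

def create_session(data):
    sid = uuid.uuid4().hex
    conn.execute("INSERT INTO sessions (id, data) VALUES (?, ?)",
                 (sid, json.dumps(data)))
    conn.commit()
    return sid

def load_session(sid):
    row = conn.execute("SELECT data FROM sessions WHERE id = ?", (sid,)).fetchone()
    return json.loads(row[0]) if row else None

def save_session(sid, data):
    conn.execute("UPDATE sessions SET data = ? WHERE id = ?", (json.dumps(data), sid))
    conn.commit()

# Node A creates the session; any other node can pick it up later by its id.
sid = create_session({"user": "alice", "cart": ["domain", "hosting-plan"]})
print(load_session(sid))
```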

… Grid Computing Has Been A Hot Topic Some 5+ Years Ago

“So what is the ‘cloud’? What is the difference between a ‘cloud’ and the ‘grid’ and a ‘server farm’?”, asks “andria”. “Grid computing has been a hot topic some 5+ years ago. In my understanding, Grid consists of server farms located on different locations. Connected with each other, acting as one”, writes the WHT user.

“I can recall that around 2000 there have been big headlines each time when someone would set a new record for the biggest server farm. Most of them were set up in scientific field. With grid computing or a server farm, you throw a task into it and it gets distributed across the place. So basically, to me, it looks like a cloud hosting company is doing nothing different. They are running a number of virtualization tasks on a grid computer (or on a single server farm)”, posts “andria”.

According to this WHT member each virtualization task gets certain resources allocated and that’s it. “I assume for hosting company this solution is somewhat easier to handle, since all resources are seen as one and the whole load is also put together. There are no boundaries of a single server. On the customer side only those who need clusters would profit from this. A VPS customer could not care less if he is getting 1/20th of a single server or 1/20.000th of 1.000 servers. So, to me, the ‘cloud’ seems to be either a single server farm or a grid network of server farms”, concludes “andria” and asks “Or am I wrong? So why call it a cloud?”.

The Dominator – Cloud hosting exists!

The last word (as of May 5th, 11:45 am EST) belongs to the WHT member named “The Dominator”. “Cloud computing exists, but most end user customers don’t get it – coders have no idea how to code to use extra nodes on the cloud, and cloud computing is so vaguely defined – it depends who you talk to – we sell lots of dedicated servers direct through discussions with customers, and customers ask me is my server cloud computing?”, explains “The Dominator”.

He adds that in the last 2 years most cloud (grid) computing data centers have had major outages. The user says that the “future is coming” and that he expects computers to connect to some form of cloud, but data centers to “take a bit longer to transition to grid models everywhere”.

Still Unclear?

If I had to work out what cloud computing is from the above conversation alone, I would lose myself. So folks, the best you can do, if you need to know what the cloud really means, is to spend a few hours reading Wikipedia, the WHT Wiki and other library resources. Then stop and think about what you have read. And if you still do not understand the “Cloud”, stop thinking about it and just use it.

PingZine – Quick Fact

Posted by hosttycoon On April - 28 - 2009

Ping! Zine Web Hosting Magazine is one of the most influential media outlets in the hosting industry. I like it because it is a quality publication with a very nice design. Read what Keith Dunkan of PingZine said about the magazine.

“We print over 20,000 copies per issue, 85% shipped in US, 15% Canada, and overseas, we average 1.7 readers per copy bringing print readership to over 30,000 readers and have average 15,000 readers of the online version, bringing our estimated total to 45,000 readers per issue”.

PingZine is the longest-running print magazine in the web hosting industry. As you can see from the above numbers, it provides great access to the web hosting market and to the most influential decision-makers in the hosting industry. Reaching over 45,000 readers (combining the print and online versions), Ping! Zine attracts audiences far beyond the traditional boundaries of host directories, portals, and forums.

Andy Patrizio, a blogger at Internet News, published a very interesting article titled “Virtualized Servers: Less Work or More?” which suggests that the data center of the future might be “at least partially virtualized”. He says, however, that the consolidation of hardware “does not mean less work”. According to Mr. Patrizio it is clear that a larger virtual data center infrastructure also means more hardware to be maintained.

One of the warnings sent to businesses during a session at the IDC Directions ’09 conference was that “virtual servers still means more servers to maintain”.

Virtualized Data Centers To Lower Business Costs?

According to Michelle Bailey, a research vice president in IDC’s data center trends and strategies group, the emphasis on cutting costs is increasing annually. She told the participants of IDC’s Directions 2009 conference that 40% of the IT managers surveyed by her company said that cost savings was their top priority.

During the last few months, “virtualization” has become one of the most frequently uttered spells in all IT markets, and in the web hosting industry in particular. Companies like Microsoft, Citrix, VMware, Parallels, and other virtualization solution producers swear that by using their virtualization products companies can consolidate low-utilization servers, increase productivity, and cut costs for hardware and technology in general. However, when it comes to full virtualization techniques, many analysts say that the average number of virtual machines (virtual servers) per physical server is only 5-6.

IDC, however, found that even if businesses move from 5 virtual machines per server to 8, some 100 million new physical servers will still have to be deployed by 2012.
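The arithmetic behind that kind of projection is simple division: the physical server count is roughly the number of workloads divided by the consolidation ratio, so a better ratio helps but cannot by itself offset growth in the number of workloads. A back-of-the-envelope sketch, with purely hypothetical numbers rather than IDC’s:

```python
# Back-of-the-envelope consolidation arithmetic (numbers are hypothetical,
# not IDC's): physical servers needed = workloads / VMs-per-server, rounded up.
import math

workloads = 1_000          # hypothetical number of server workloads to host
for vms_per_server in (1, 5, 8):
    physical = math.ceil(workloads / vms_per_server)
    print(f"{vms_per_server} VMs per server -> {physical} physical servers")
```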

Automation Is A Key To Success

In his opening keynote at Parallels Summit 2009, Serguei Bellousov, the virtualization provider’s CEO, said that automation is one of the most important processes and has to be implemented as much as possible by any IT company.

IDC says that by 2012 the share of enterprise data centers is going to shrink from 77% to 65% of the total number of data centers. Virtualization technologies and the shift to outsourcing are key factors pushing this change.

IDC’s research papers say that the share of data centers in which enterprise computing jobs are hosted and outsourced by companies will grow from 9% to 16%. The smaller data centers that serve local markets and companies will also grow, from 14% to 19%, by 2012.

The analysts suggest that companies that build and maintain data centers need to rethink and redesign the concept of running IT storage facilities. Adding excess capacity should not be the data center owner’s main objective. IDC says that “instead of building a 100,000 square-foot data center and using just 5,000 square feet, build it as a 5,000 square foot modular design, and add on capacity in small, repeatable increments”.