Monthly Archives: February 2014

Virtualisation – doing more with less and paying less
– a beginner’s guide

The trend now is to do away with several servers and combine them into one. It sounds easy, but many so-called IT people don’t understand it, so what chance do you have? Well, here is a simple guide. Ask your proposed installer to explain it, and if they can’t make it this simple, don’t use them – use us instead!

A file server isn’t just a box that stores data for sharing – it’s much more than that, and it can be even more than that too.

Let me explain all the tasks and jobs that a server has to do:

Domain Controller: it looks after all the computers and users – who does what, who can do what, and where. It contains all the permissions and security rules; without it your network domain won’t work. It holds all this in a database called Active Directory, a bit like a phone book.

DHCP Server: when a computer, phone, tablet or anything else joins the network it needs some information so it can connect with everything else. The DHCP server provides this and also limits how long the connection details are leased for.

File Server: this shares the data files according to the rules of the Domain Controller and Active Directory.

That’s the simple stuff; now comes the specialised heavyweight stuff:

SQL Server: this is a special role for holding and handling data. Big, fat databases need special handling to keep them working quickly.

Exchange Server: this is the e-mail, diary and contacts server. It’s what makes Outlook work in a network.

Remote Desktop Server, aka Terminal Services: this takes some imagining. Imagine several desktop computers crammed into one box, so that several people can connect remotely as if they were using a computer in the office.

In the past you needed three or four servers: one for the simple stuff and one each for the heavier roles. You could not put them all on one machine because they would fight with each other and the system would become unreliable.

Computers have come a long way and are now much more powerful, yet a server still spends a lot of its time doing nothing. A way has been developed to use this spare capacity without needing all the extra drives, power supplies, video cards and so on. This is called virtualisation.

Imagine we take a big, powerful computer with 24Gb of memory and two network cards. It’s going to be cheaper than four separate servers. Now divide it up: the base server has 8Gb of memory reserved for it. Take the rest of the memory plus a chunk of hard disk, split it in two, and treat each part as a server – separate memory, but sharing the same power supply, processor and so on.

If a single server costs, for example, £3,000, then you would need three or four at a cost of up to £12,000. By using virtualisation we can use one £6,000 computer to do all the tasks.
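As a quick sketch, the arithmetic above (all figures taken from the example in the text) looks like this:

```python
# Resource split for the example host: 24Gb of RAM, 8Gb reserved
# for the base server, the remainder shared between two guests.
total_ram_gb = 24
host_reserved_gb = 8
guests = 2
per_guest_gb = (total_ram_gb - host_reserved_gb) // guests
print(per_guest_gb)  # 8 - each virtual machine gets 8Gb

# Cost comparison: four £3,000 servers versus one £6,000 host.
separate_cost = 4 * 3_000
virtualised_cost = 6_000
print(separate_cost - virtualised_cost)  # 6000 - the saving in pounds
```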

Virtualisation saves money, energy and space. Once you’ve decided to go virtual, take steps to make implementation easier: get to know the important terms, the types of virtualisation, and the leading companies and products.

Then there is backup and anti-virus control.

So what are the reasons for using virtualisation?

  • It saves money: virtualisation reduces the number of servers you have to run, which means savings on hardware costs and on the total amount of energy needed to run the hardware and provide cooling.
  • It’s good for the environment: virtualisation is a green technology through and through. The energy savings from widespread adoption would reduce the need to build so many power plants and would thus conserve the earth’s energy resources.
  • It reduces system administration work: with virtualisation in place, system administrators do not have to support so many machines and can move from fire-fighting to more strategic administration tasks.
  • It gets better use from hardware: virtualisation enables higher utilisation rates, because each server supports enough virtual machines to increase its utilisation from the typical 15% to as much as 80%.
  • Backing up is easier, and the systems don’t take from each other: each virtual machine has its own reserved resources rather than competing for them.

  • Licensing
    You need a base licence for Windows Server with Hyper-V (the virtualiser).

    You need a Windows Server licence for each virtual machine – in our example, two.

    You need licences for Exchange Server and SQL Server.

    You need user licences (CALs) for each concurrent user of each system.
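As a rough tally for the two-VM example above (counts only – real prices vary, and the ten-user headcount is an assumption for illustration):

```python
# Server-side licences for the example: one host, two guests,
# plus Exchange and SQL Server.
licences = {
    "Windows Server with Hyper-V (host)": 1,
    "Windows Server (one per virtual machine)": 2,
    "Exchange Server": 1,
    "SQL Server": 1,
}

# CALs are per concurrent user, per system; ten users assumed here.
users = 10
cals = {"Windows": users, "Exchange": users, "SQL": users}

print(sum(licences.values()))  # 5 server-side licences before CALs
```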

    Why is Sage Line 50 so slow?

    I wrote this some time ago but after a conversation recently I thought I would resurrect it as it still seems to be very topical.

    Sage Line 50 is a nice program for small businesses but it has one major flaw – it has effectively killed off all the competition.  That’s not a bad thing because it means that it will go on for ever as a standard but it also means there is no incentive to make it any better.

    This means that Sage’s fatal flaw will probably never be fixed. It was designed to work on a single computer in a small office, storing data on the local hard drive in Sage’s own .DTA file format. I think these .DTA files are pretty much unchanged since Graham Wylie’s original program was written for CP/M on an Amstrad PCW. On a network, it simply stores its data in disk files on a server, shared across the network.

    So why is Line 50 so slow? The problem with Sage’s strategy of storing data in shared files is that when you have multiple users, the files are opened/locked/read/written by several of them across the network at the same time. On a non-trivial set of books this involves a good number of files, some of them very large. Networks are slow compared to local disks, and not entirely reliable, so you’re bound to end up with locked-file conflicts, and would be lucky if data wasn’t corrupted from time to time. As the files get bigger and the number of users grows, the problem gets worse exponentially.
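A toy probability model (my own simplification – not Sage’s actual locking behaviour) shows how quickly the contention grows:

```python
# If each user holds a lock on the shared data file for some fraction
# of the time, the chance that a new request finds the file already
# locked by someone else grows with both user count and lock duration.
def p_blocked(users, lock_fraction):
    """Probability that at least one of the other users holds the
    lock, assuming independent access patterns."""
    return 1 - (1 - lock_fraction) ** (users - 1)

# Locks held 5% of the time (small files, quiet network):
print(round(p_blocked(2, 0.05), 2))   # 0.05
print(round(p_blocked(10, 0.05), 2))  # 0.37

# Locks held 20% of the time (bigger files, slower network):
print(round(p_blocked(10, 0.20), 2))  # 0.87
```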

    Sage won’t admit it. The standard Sage solution seems to be to tell people their hardware is inadequate. In a gross abuse of their consultancy position, some independent Sage vendors have been known to sell hapless users new high-powered servers, which does make the problem appear to go away. Until, of course, the file gets a bit bigger.

    Anyone who knows anything about networking will realise straight away that this is a hopeless situation – but not those selling Sage, at least in public.
    In fact it’s in Sage’s interests to keep Line 50 running slower than a slug in treacle. Line 50 is the cheap end of the range – if it ran at a decent speed over a network, multi-user, people wouldn’t buy the expensive Line 200. The snag is that Line 50 is sold to small companies that do need more than one or two concurrent users and do have a significant number of transactions a day.

    There is continual talk that newer versions will use a proper database; indeed, in 2006 they announced a deal to work with MySQL. But the world has been waiting for the upgrade ever since. It’s always coming in “next year’s” release, but “next year” never comes. The latest (as of December 2009) is that they’re ‘testing’ a database version with some customers and it might come out in version seventeen. (2014 update: it’s still not there yet.) It’s amazing that Sage’s other products, bought-in projects such as ACT!, all use SQL.

    One Sage Solution Provider, realising that this system was always going to time-out in such circumstances, persuaded the MD of the company using it to generate all reports by sitting at the server console. To keep up the pretence this was a multi-user system, he even persuaded them to install it on a Windows Terminal Server machine so more than one person could use it by means of a remote session.

    If that weren’t bad enough, apparently it didn’t even work when sitting at the console, and they advised the customer to get a faster router. I’m not kidding – this really did happen.
    The fact is that Sage Line 50 does not run well over a network due to a fundamental design flaw. It’s fine if it’s basically single-user on one machine.

    What can you do to fix it?
    If you accept that Sage Line 50 is fundamentally flawed when working over a network you’re not left with many options other than waiting for Sage to fix it. All you can do is throw hardware at it. But what hardware actually works?

    First the bad news – the difference in speed between a standard server and a turbo-nutter-spaceship model isn’t actually that great. If you’re lucky, on a straight run you might get a four-times improvement from a user’s perspective. The reason for spending lots of money on a server has little to do with the speed a user sees; it’s much more to do with the number of concurrent users.

    So, if you happen to have a really duff server and you throw lots of money at a new one you might see something that took a totally unacceptable 90 minutes now taking a totally unacceptable 20 minutes. If you spend a lot of money, and you’re lucky.

    The fact is that, on analysing the server side of this equation, I’ve yet to see the server itself struggling with CPU time, running out of memory, or anything else to suggest that it’s the problem. The most problematic client started with a dual-core processor and 4Gb of RAM – a reasonable specification for a few years back. At no time did I see issues with memory size, and processor utilisation was only a few percent on one of the cores.

    I’d go as far as to say that the only reason for upgrading the server is to allow multiple users to access it on terminal server sessions, bypassing the network access to the Sage files completely. However, whilst this gives the fastest possible access to the data on the disk, it doesn’t overcome the architectural problems involved with sharing a disk file, so multiple users are going to have problems regardless. They’ll still clash, but when they’re not clashing it will be faster.

    But, assuming you want to run Line 50 multi-user the way it was intended, with the software installed on the client PCs, you’re going to have to look away from the server itself to find a solution.
    The next thing Sage will tell you is to upgrade to 1Gb Ethernet – it’s ten times faster than 100Mb, so you’ll get a 1000% performance boost. Yeah, right!

    It’s true that the network file access is the bottleneck, but it’s not the raw speed that matters.
    I’ll let you into a secret: not all network cards are the same.

    They might communicate at a line speed of 100Mb, but this does not mean that the computer can process data at that speed, and it does not mean it will pass through the switch at that speed. This is even more true at 1Gb.

    This week I’ve been looking at some 10Gb network cards that really can do the job – communicate at full speed without dropping packets, and pre-sort the data so a multi-CPU box can make sense of it. They cost £500 each. They’re probably worth it from a performance point of view, but you will need fast cable and fast switches too – more on switches later.

    Have you any idea what kind of network card came built in to the motherboard of your cheap-and-cheerful Dell? I thought not! But I bet it wasn’t the high-end type.

    The next thing you’ve got to worry about is the cable. There’s no point looking at the wires themselves or at what the LAN card says it’s doing – you’ll never know. Testing that a cable has the right wires on the right pins is not going to tell you what it will do when you put data down it at high speed. Unless the cable’s perfect it’s going to pick up interference to some extent, most likely from the wire running right next to it, but you’ll never know how much this is affecting performance. The wonder of modern networking means that errors on the line are corrected automatically without worrying the user about it. If 50% of your data gets corrupted and needs re-transmission, then by the time you’ve waited for the error to be detected, the replacement requested and the intervening data put on hold, your 100Mb line could easily be clogged with 90% junk – but the line speed will still say 100Mb with minimal utilisation.
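As a back-of-envelope sketch of that effect (a deliberate simplification: it ignores detection delays and the held-up data described above, which make the real impact worse):

```python
# The link still reports its full line speed, but every corrupted
# frame has to be sent again, so useful throughput drops.
def effective_mbps(line_speed_mbps, loss_fraction):
    """Useful bandwidth when loss_fraction of the data must be resent."""
    return line_speed_mbps * (1 - loss_fraction)

print(effective_mbps(100, 0.5))   # 50.0 - half the 100Mb line doing useful work
print(effective_mbps(1000, 0.5))  # 500.0 - a faster line doesn't fix the loss
```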

    Testing network cables properly requires some really expensive equipment with wonderful names like time domain reflectometer, and the only way around it is to have the cabling installed by someone who really knows what they’re doing with high-frequency cable to reduce the likelihood of trouble. If you can, hire some proper test gear anyway. What you don’t want to do is let an electrician wire it up for you in a simplistic way. They all think they can, but believe me, they can’t.

    Next down the line is the network switch and this could be the biggest problem you’ve got. Switches sold to small business are designed to be ignored, and people ignore them. “Plug and Play”.

    You’d be forgiven for thinking that there wasn’t much to a switch, but in reality it’s got a critical job, which it may or may not do very well in all circumstances. When it receives a packet (a sequence of data – a message from one PC to another) on one of its ports, it has to decide which port to send it out of to reach its intended destination. If it receives multiple packets on multiple ports it has to handle them all at once. Or one at a time. Or give up and ask most of the senders to try again later.

    What your switch is doing is probably a mystery, as most small businesses use unmanaged switches. A managed switch, on the other hand, lets you connect to it using a web browser and actually see what’s going on. You can also configure it to give more priority to certain ports, protect the network from “packet storms” caused by accident or malicious software, and generally debug poorly performing networks. This isn’t intended to be a tutorial on managed switches; just take it from me that in the right hands they can help the situation a lot.

    Unfortunately, managed switches cost a lot more than the standard variety. But they’re intended for the big boys to play with, and consequently they tend to switch more simultaneous packets and stand up to heavier loads.

    Several weeks back we upgraded the site with the most problems from good-quality standard switches to some nice expensive managed ones, and guess what? It’s made a big difference. My idea was partly to use the switch to snoop on the traffic and figure out what was going on, but as expected it also appears to have improved performance and, most importantly, reliability considerably.

    If you’re going to try this, connect the server directly to the switch at 1Gb. It doesn’t appear to make a great deal of difference whether the client PCs are 100Mb or 1Gb, possibly due to the cheapo network interfaces they have, but if you have multiple clients connected to the switch at 100Mb they can all simultaneously access the server down the 1Gb pipe at full speed (to them).

    This is a long way from a solution, and it’s hardly been conclusively tested, but the extra reliability and resilience of the network has at least allowed a Sage system to run without crashing and corrupting data all the time.

    If you’re using reasonably okay workstations and a file server, my advice (at present) is to look at the switch first, before spending money on anything else.

    Anything else?
    The other solutions are: a) invest in a good Unix server – it’s complicated, but basically the server handles files faster and better than Microsoft OSs, which is why all the big companies use it; or b) invest in a terminal server running Remote Desktop with the data held locally – this way all the processing is done on one machine, the server, and the results are sent out to the workstations.

    ……………… or wait for Sage to fulfil their 2006 promise. It’s only been eight years – and did I mention it’s not in their interest?

    Congratulations, you made it to 2014 – It’s a good year to be tax efficient and start buying some things – NOT!


    If your car is newer than your computer equipment then you are probably making the same mistakes as RBS, Lloyds and the 36% of businesses still using XP.
    Let’s face it, business has not been good or moving in an upward direction since 2008. Yes, things are more optimistic now, but you may be one of the many businesses facing problems: for some it is cash flow, for others it’s funding the extra business. There is now a danger of overtrading, where you don’t want to run out of liquidity. You also probably know, deep down, that you need new IT equipment. There are two temptations: buy the cheapest possible, or soldier on until it breaks. OUCH! That’s expensive.
    Neither is a real option. So you have decided to choose the right equipment from the right supplier, but you didn’t realise how expensive computer equipment is. Surprise, surprise – it’s gone up since 2008.
    Sourcing funds to buy “stuff”
    You have a few options: use your cash or borrow it. Certainly in an upturn you can make enough money to fund the borrowing, but with the bank charging 6.5% for a low-cost loan and credit cards still charging 20%+, it doesn’t make much sense, especially if you can only claim a small amount back against tax.
    There is a way that works: leasing. You can claim everything against tax and spread your cost over a few years. Because interest rates are low it’s a good time, and as the rate is fixed when you sign the deal, you won’t get a shock when interest rates go up.
    Let’s look at £10,000 of equipment – a mid-range server and some PCs, installed with software.
    If you pay cash, you have taken that out of your cash flow, or you may have to borrow the money, and you can only claim 20% back against tax. The borrowing rate may also change.
    To lease the equipment over four years will cost you around £70 a week. Over four years that works out at around 4.4% interest per annum, and you can offset all the payments against tax. There is a residual buy-back agreement that allows you to buy the equipment at the end of the term, usually for one quarter’s payment.
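Checking those figures (the 20% tax relief rate is the one mentioned earlier in this post; treat both numbers as illustrations rather than a quote):

```python
# £70 a week over four years on £10,000 of equipment.
weekly_payment = 70
weeks = 52 * 4
total_paid = weekly_payment * weeks
print(total_paid)  # 14560 - pounds paid over the term

# All payments can be offset against tax; at a 20% rate the relief is:
print(int(total_paid * 0.20))  # 2912 - pounds of tax relief
```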
    If you want to protect your cash flow, spread your costs, save money and have maximum tax efficiency, then it’s the best way to have some “stuff”.
    Why are we going on about it now? Simple, really: before 2008 it was cheap to borrow money and no one needed leasing – everyone was rolling in cash and life was good, so we stopped offering it in 2000. It makes more sense now than ever: with low interest rates and a need to keep cash flow going, especially for companies damaged in the recession, it makes perfect sense for us to start offering leasing again.
    We are the intermediary; we don’t get commission or any benefits, but we like to offer our clients something the others don’t. Experience tells us what the best choice is for our clients.