Moving to bare metal

Status
Not open for further replies.

krooney

Member
Jun 18, 2018
160
16
18
Hi everyone,

We are looking to move from a Digital Ocean VM to a bare metal server and were wondering if anyone had any tips on the setup. We will be using a Dell R430, dual CPU, 32 cores, 64 GB of RAM.
We are just thinking of installing FusionPBX directly on the server and keeping the D.O. VM as a backup in case of failure.

Any tips, ideas, tweaks or recommendations would be great.

Thanks in advance
 

Adrian Fretwell

Well-Known Member
Aug 13, 2017
1,414
376
83
I would need a very, VERY good reason to go to bare metal. You have a nice-sounding machine there, so why not load your own hypervisor? In my opinion, XCP-ng is one of the best, and it's open source.

https://xcp-ng.org/
 
  • Like
Reactions: krooney

Kenny Riley

Active Member
Nov 1, 2017
243
39
28
36
Wouldn't server load be a good reason to go bare metal? I love the idea of virtualization from a DR perspective, and the snapshots, but I have always been under the impression, and been told by FusionPBX support themselves, that it's recommended to run on bare metal for reliability on a busy server. Our setup isn't huge, but it's decent: 80 domains and about 500 endpoints, with about 40 simultaneous calls at peak during the day.
 

Adrian Fretwell

Well-Known Member
Aug 13, 2017
1,414
376
83
First, define a busy server :) ...

Don't you have to manage load on bare metal in just the same way as with a hypervisor? At least if you own the hypervisor, you can manage your own CPU priorities. And when it comes time to move a busy FusionPBX to new hardware, you can do it without even shutting it down. Try doing that with bare metal!

From a FusionPBX support perspective, it is the easiest thing to say "run on bare metal for reliability", and I can see why when I look at the poor performance of some virtual machine offerings, but don't forget that not all hypervisors are created equal.
 

KonradSC

Active Member
Mar 10, 2017
166
98
28
This is a very divisive topic. :)
Android vs iOS, Chevy vs Ford, Meat Eater vs Vegetarian. Metal vs Virtual.

We moved to bare metal for our primary servers years ago and haven't looked back. I grew tired of questioning the hypervisor every time there was some kind of blip in call quality or signaling. Everything else is virtual though, including our backup FreeSWITCH servers.

My tip: install ifenslave for NIC redundancy.
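On Debian that's just the ifenslave package (plus making sure the bonding module loads), something like:
Code:
    apt-get install ifenslave
    modprobe bonding
    echo bonding >> /etc/modules    # optional: load the module at every boot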

My /etc/network/interfaces file looks something like this (I scrubbed the IPs):
Code:
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    source /etc/network/interfaces.d/*

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # Enslave these interfaces
    allow-hotplug eno1
    allow-hotplug eno2
    allow-hotplug enp3s0f0
    allow-hotplug enp3s0f1

    # Public
    auto bond0
    iface bond0 inet static
        address 1.1.1.172
        netmask 255.255.255.192
        broadcast 1.1.1.191
        gateway 1.1.1.129
        dns-nameservers 8.8.8.8 4.2.2.2
        bond-mode 1              # active-backup (failover)
        bond-miimon 100
        bond-primary eno1
        bond-slaves eno1 enp3s0f0
        bond-updelay 200
        bond-downdelay 200

    # Private
    auto bond1
    iface bond1 inet static
        address 192.168.220.12
        netmask 255.255.255.0
        broadcast 192.168.220.255
        dns-nameservers 8.8.8.8 4.2.2.2
        bond-mode 1              # active-backup (failover)
        bond-miimon 100
        bond-primary eno2
        bond-slaves eno2 enp3s0f1
        bond-updelay 200
        bond-downdelay 200
        post-up ip route add 192.168.0.0/16 via 192.168.220.1
        post-up ip route add 172.16.0.0/16 via 192.168.220.1
        post-up ip route add 10.0.0.0/8 via 192.168.220.1
        post-up ip route add 172.26.0.0/16 via 192.168.220.1

    # Alias for the floating IP
    # (note: no 'auto' line, so this alias is not brought up automatically at boot)
    iface bond0:0 inet static
        address 1.1.1.167
        netmask 255.255.255.192
 

KonradSC

Active Member
Mar 10, 2017
166
98
28
Yes. In my config I have two public NICs and two private NICs.

That config is for failover, not an aggregate link. FreeSWITCH support was not very keen on a port channel, and I had trouble getting that working anyway. A 1 Gb or 10 Gb interface with failover should be plenty.
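If you want to confirm the mode and see which NIC is currently active, the bonding driver exposes it all under /proc (generic Linux, nothing FusionPBX-specific):
Code:
    # show the bond mode, MII status and the currently active slave
    cat /proc/net/bonding/bond0
    # simulate a failure of the primary and watch the active slave change
    ip link set eno1 down
    cat /proc/net/bonding/bond0
    ip link set eno1 up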
 

krooney

Member
Jun 18, 2018
160
16
18
I have another question. I am renting 1U of colocation space and the datacenter gave me the following info (I modified the IPs):
Public IP: 170.0.0.99/29
Gateway: 170.0.0.1
Customer IPs: 170.0.0.102 to 104
Since I'm going straight bare metal and they are giving me one connection, which IP should I configure on the NIC, the public one?

Thanks in advance @KonradSC @Adrian Fretwell
 

Adrian Fretwell

Well-Known Member
Aug 13, 2017
1,414
376
83
These IP addresses don't quite make sense to me.

It looks like they are giving you a /29 subnet. This would normally give you six usable IP addresses, one of which must be the gateway, leaving five for your machines.
It is normal practice to have the gateway within the subnet, and 170.0.0.1 is not within your 170.0.0.99/29 subnet. In fact, 170.0.0.99 would just be a host IP within that subnet. With these IPs, your /29 would work out as:

Network: 170.0.0.96/29
Broadcast: 170.0.0.103

Your first usable IP would be 170.0.0.97 going through to 170.0.0.102. If you are only interested in using one single IP address, then it doesn't really matter which of the usable ones you choose.
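If you want to double-check the arithmetic, a quick one-liner (assuming python3 is on the box) will confirm it:
Code:
    python3 -c "
    import ipaddress
    n = ipaddress.ip_network('170.0.0.99/29', strict=False)
    print('Network:  ', n)                            # 170.0.0.96/29
    print('Broadcast:', n.broadcast_address)          # 170.0.0.103
    print('Usable:   ', [str(h) for h in n.hosts()])  # .97 through .102
    "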

Also, I don't get what they mean by "public IP"; they are all public!

Maybe they are expecting you to run some routing software, or something like Open vSwitch, to give you virtual NICs.
 

krooney

Member
Jun 18, 2018
160
16
18
Sorry, I modified the original IPs. This is more like what they actually sent me; does it make more sense?

Internet
170.0.0.128/29

VRRP GW: 170.0.0.129

Datacenter R1: .130

Datacenter R2: .131

Cust IP: .132-134
 

Adrian Fretwell

Well-Known Member
Aug 13, 2017
1,414
376
83
Yes, that makes perfect sense.

170.0.0.128/29 is your network, which will have usable IPs from 170.0.0.129 to 170.0.0.134. So the first usable IP is used as your gateway. It is a "virtual" IP for the Virtual Router Redundancy Protocol (VRRP), and I assume that .130 and .131 are the real IPs used underneath the VRRP.

So I would configure .132 on the NIC. There is, of course, nothing stopping you from configuring multiple IPs on the same NIC; it just depends on how you want to run the box.
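On a plain Debian install with ifupdown, that would look something like this (the interface name is just an example, check yours with ip link):
Code:
    # /etc/network/interfaces - minimal sketch; substitute your real NIC name
    auto eno1
    iface eno1 inet static
        address 170.0.0.132
        netmask 255.255.255.248   # /29
        gateway 170.0.0.129       # the VRRP address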
 
  • Like
Reactions: krooney

hfoster

Active Member
Jan 28, 2019
676
80
28
34
One thing I will chip in with is that you do have to think about management complexity with virtualisation, no matter the platform. It's all 'worth it' for the snapshots, HA, flexibility, etc., but my god, it can be incredibly spooky having to delve into updating these things without extensive training, even more so when you've got shared storage on SANs and more enterprise-y features enabled.

Know your hypervisor platform well, and know how to recover entirely from scratch. With voice, you also really have to never, ever over-provision.
 

John

Member
Jan 23, 2017
97
8
8
I started with a VPS 8 years ago. I hated it and moved to bare metal. This was because I did not have the knowledge to virtualize my own bare metal. I guess there were also not many options other than VMware for virtualization, which was more expensive than bare metal.

Now there are many options for open-source virtualization, and I became certified in one of them. After 6 years on bare metal, I virtualized my bare metal and installed Fusion on a VPS. I am extremely happy. I feel like I moved from the stone age to the modern era. Being on someone else's VPS differs greatly from being on your own virtualized bare metal.

I am now trying to learn OpenStack. In fact, that is the future. Virtualization is the first step into the modern era; OpenStack is the next.

I love the time it saves me, the fast reboots, the confidence of snapshots and scheduled backups for development, the easy-to-download images, and knowing that within a few minutes I can take a backup and install it on another server.

Never bare metal again:)
 
  • Like
Reactions: Adrian Fretwell

Adrian Fretwell

Well-Known Member
Aug 13, 2017
1,414
376
83
I completely agree, John. The key is running your own virtualization server(s). We started off with Citrix XenServer and then moved to XCP-ng. We have been fully virtualized since 2016 and have never had a problem with it. ...And being able to live-migrate a running FusionPBX from one server to another, with live calls going on, is just amazing!
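For anyone curious, within an XCP-ng pool that migration is roughly a one-liner from the xe CLI (the names below are examples; check xe help vm-migrate on your version):
Code:
    # live-migrate a running VM to another host in the same pool
    # list names with: xe vm-list  and  xe host-list
    xe vm-migrate vm=fusionpbx-01 host=xcp-host-02 live=true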

@John I would love to hear more about what you are doing with OpenStack.
 
  • Like
Reactions: krooney and John

John

Member
Jan 23, 2017
97
8
8
@Adrian Fretwell I have just finished my master's degree. I call it "I am out of my grave." So now I have my freedom and a long list of things that I am excited to learn. OpenStack is definitely one of them, as it is the future of cloud, or I should say the real cloud.

I will definitely share what I learn. So far, I know the theory and the structure of it, but not the practical part yet.
 

bcmike

Active Member
Jun 7, 2018
326
54
28
53
We started with, and still have, a lot of customers on our bare metal servers, each in an A/B hot-spare configuration. There are certainly some advantages to bare metal, but virtualization adds so much more flexibility. That said, we recently had a bad experience with Proxmox running KVM machines, where a bunch of processes went to max CPU (we suspect a ZFS I/O storm) and killed the whole machine, or should I say, left it running just enough that HA wouldn't kick in. When we failed over manually to other machines in the cluster, it was a huge nightmare. Some of this was ultimately human error and bad configuration choices, but it caused us to take a bit of a hybrid approach.

We're rebuilding the whole cluster from the ground up, again with Proxmox but with a much different approach. Here are the bullet points:
  • No ZFS. It used a lot of memory that it often failed to relinquish, and we suspect it was the root of our CPU issue. We're back to EXT4 thin volumes on caching hardware RAID controllers, with a series of three mirrors over six drives. Not the fastest, but the best data integrity in our opinion. FYI, we only ever use local storage.
  • No more clustering. This was tough, as live migration has saved a lot of work many times, but clustering also caused a lot of heartache and never really worked in emergencies as anticipated.
  • All VMs will have a hot spare running on a different host at all times, and they'll be striped across machines. We'll handle replication the old-fashioned way, with rsync scripts and the like (see the sketch after this list).
  • We're also going to incorporate the new Proxmox Backup Server.
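
The replication scripts are nothing fancy; the gist is something like this (the paths and spare hostname below are illustrative, not our exact setup, and the database still needs its own backup/restore handling):
Code:
    #!/bin/bash
    # push call recordings and FreeSWITCH config to the hot spare
    SPARE=pbx-spare.example.com   # example hostname
    rsync -az --delete /var/lib/freeswitch/recordings/ "${SPARE}":/var/lib/freeswitch/recordings/
    rsync -az /etc/freeswitch/ "${SPARE}":/etc/freeswitch/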
This sounds like regression, and it probably is, but we can still do things like snapshot machines, back up machines, and migrate them manually. It sort of mimics our bare metal hot A/B approach, but in a virtualized environment.

As Scotty once said: "Sometimes the more you overdo the plumbing, the easier it is to stop up the drain."

Oh and by the way, Chevy... ;)
 

Davesworld

Member
Feb 1, 2019
90
11
8
64
ZFS doesn't make much sense unless you have more than one virtual disk to mirror; on a single partition it's all but useless, as it will tell you about errors but cannot correct them, as far as I know. I have run it on two identical partitions before, but for a SIP server a BTRFS mirror is plenty solid. Where ZFS shines is that its RAID 5, 6 and 7 equivalents (RAID-Z1/Z2/Z3) actually are reliable, which you can't really say for BTRFS parity RAID. Anything more than a mirror for /var/lib/freeswitch is overkill for a SIP server anyway. I also direct my backups to that partition, so I never lose my database backups, as I never reformat that partition on a clean install.
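For reference, a BTRFS mirror for that partition is only a couple of commands (the device names below are examples; double-check with lsblk before doing anything destructive):
Code:
    # create a two-device BTRFS mirror (data and metadata both raid1)
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /var/lib/freeswitch
    # a periodic scrub detects bad blocks and repairs them from the good copy
    btrfs scrub start /var/lib/freeswitch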
 

Davesworld

Member
Feb 1, 2019
90
11
8
64
Kenny Riley said: "Wouldn't server load be a good reason to go bare metal? ..."
I like bare metal where possible. For a small number of users a single cheap VPS works well and as others have said, it's easy to move and/or duplicate. I have used both.
 

bcmike

Active Member
Jun 7, 2018
326
54
28
53
Davesworld said: "Where ZFS shines is that its RAID 5, 6 and 7 equivalents ... actually are reliable."
We used RAID 6 over six drives. Performance was terrible.
 