I have a small Shuttle barebone computer which I mainly use as a KVM hypervisor on top of Ubuntu Server 14.04 to run a few VMs. Since the barebone also sports an HDMI port and the CPU comes with an integrated Intel HD GPU, I thought it would make a great Kodi (formerly XBMC) media center as well. However, I’ve been unable to find a working walk-through on how to install it on Ubuntu Server, most likely because nobody ever does this on a server OS. Anyway, here’s how to install the latest Kodi release on Ubuntu Server 14.04, including hardware acceleration for the Intel HD GPU. Continue reading “How to install Kodi on Ubuntu Server 14.04”
A few dozen times each day, the Xeon E3-1275 v3 CPU on my SuperMicro X10SLM-F board generates a Machine Check Event (MCE). The Linux kernel logs all MCEs in the kernel log:
mce: [Hardware Error]: Machine check events logged
mce: [Hardware Error]: Machine check events logged
CMCI storm detected: switching to poll mode
CMCI storm subsided: switching to interrupt mode
mce_notify_irq: 14 callbacks suppressed
mce: [Hardware Error]: Machine check events logged
mce: CPU supports 9 MCE banks
mce: [Hardware Error]: Machine check events logged
Using mcelog, I was able to pull some more detailed information about the machine check events:
Hardware event. This is not a software error.
MCE 0
CPU 3 BANK 0
TIME 1415087019 Tue Nov  4 08:43:39 2014
MCG status:
MCi status:
Corrected error
Error enabled
MCA: Internal parity error
STATUS 90000040000f0005 MCGSTATUS 0
MCGCAP c09 APICID 6 SOCKETID 0
CPUID Vendor Intel Family 6 Model 60
The MCEs all look the same (BANK 0 is always affected); only the CPU and the APICID differ. I updated the BIOS, replaced the ECC RAM, and even replaced the mainboard, but the errors kept showing up. Continue reading “QEMU on Haswell causes spurious MCE events”
This is a heads-up for everybody with an Nvidia GTX 760 (other Nvidia cards may be affected as well) trying to install OS X Yosemite 10.10 using Clover on a Hackintosh. If you’re getting a blank/black screen at the start of the installation, try adding the nv_disable=1 boot flag. When booting with the -v verbose flag, my screen went dark right after the installer displayed “DSMOS has arrived”. It always happened right when the installer switched from text mode to graphics mode.
<key>Boot</key>
<dict>
    <key>Arguments</key>
    <string>dart=0 -v kext-dev-mode=1 nv_disable=1</string>
    ...(more)
Once OS X Yosemite has been installed, the nv_disable boot flag is no longer required and should be removed.
I’m running some sort of an experimental KVM guest with IPv6 connectivity only. Since it still had Ubuntu Server 13.10 installed, I tried to run do-release-upgrade on it to upgrade it to the latest Ubuntu Server release, which at the time of this writing is 14.10. However, the do-release-upgrade command kept saying that no new release could be found:
root@ipv6lab:~# do-release-upgrade
Checking for a new Ubuntu release
No new release found
I verified the /etc/update-manager/release-upgrades configuration file, but it already contained the Prompt=normal line. After doing some digging, I found out that do-release-upgrade tries to connect to http://changelogs.ubuntu.com, but there is no AAAA DNS record for this host. Essentially, this means that an Ubuntu server can’t be upgraded to a newer release over IPv6 because it can’t reach the update info site over IPv6.
root@ipv6lab:~# dig +short changelogs.ubuntu.com A
18.104.22.168
root@ipv6lab:~# dig +short changelogs.ubuntu.com AAAA
root@ipv6lab:~#
Interestingly, the Ubuntu APT repository is accessible over IPv6, which is why something like apt-get update runs fine on IPv6-only Ubuntu servers.
I solved the problem by creating an IPv6-to-IPv4 HTTP proxy using HAProxy on an IPv4/IPv6 dual-stack server. The proxy listens on an IPv6 address and “tunnels” all requests to changelogs.ubuntu.com using the IPv4 address of the changelogs server. I was able to upgrade an IPv6-only Ubuntu server to a newer release this way. Continue reading “Ubuntu release upgrade says ‘no new release found’ on IPv6-only server”
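The proxy described above can be sketched roughly like this. This is a minimal HAProxy listen block of my own, not taken from the full post; the wiring (including pointing changelogs.ubuntu.com at the proxy) is an assumption:

```haproxy
# Minimal sketch: accept HTTP on IPv6 and forward to changelogs.ubuntu.com,
# which HAProxy resolves to its IPv4 address at startup.
listen changelogs-proxy
    bind :::80 v6only
    mode http
    server changelogs changelogs.ubuntu.com:80
```

On the IPv6-only server, an /etc/hosts entry mapping changelogs.ubuntu.com to the dual-stack proxy’s IPv6 address would then make do-release-upgrade use the proxy transparently.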
So far, I’ve been using shell commands or the Clover Configurator to mount Clover’s EFI partition in order to edit Clover’s main configuration file.
However, I find it easiest to mount the hidden EFI volume in Disk Utility:
The hidden partitions will only be shown if Disk Utility’s debug mode has been enabled. In a shell, type:
defaults write com.apple.DiskUtility DUDebugMenuEnabled 1
Start Disk Utility and enable the option to show all partitions:
Ever since I merged an SSD and an HDD into an OS X Fusion Drive, Clover has been unable to auto-boot the new logical Fusion Drive volume. Clover just sat on its startup volume selection screen, waiting for me to select the volume to boot. I found some hints that using a UUID should make Clover auto-boot the Fusion Drive, but I was unable to make it work with any of the UUIDs of the logical/physical volumes.
What finally worked was using the volume’s EFI device path. Here’s an excerpt from my Clover configuration:
<key>Boot</key>
<dict>
    <key>Arguments</key>
    <string>dart=0</string>
    <key>DefaultVolume</key>
    <string>HD(3,GPT,17337FC1-A0F7-4C73-DEA1-363BA11AB811,0x3A346008,0x40000)</string>
    <key>Timeout</key>
    <integer>5</integer>
With this ID, Clover auto-boots my Fusion Drive volume just fine after waiting for 5 seconds for user input.
The full IDs can be found in Clover’s log file in /Library/Logs/CloverEFI/ and look like this:
system.log:0:837 0:000 PciRoot(0x0)\Pci(0x1F,0x2)\Sata(0x0,0xFFFF,0x0)\HD(3,GPT,17337FC1-A0F7-4C73-DEA1-363BA11AB811,0x3A346008,0x40000)
You have to strip the PciRoot/Sata part for Clover.
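As an illustration, the prefix can be stripped with a simple grep; the sample line below is the one from the log excerpt above:

```shell
# Strip the PciRoot/Sata prefix from a Clover log line, keeping only the
# HD(...) part that goes into DefaultVolume.
line='PciRoot(0x0)\Pci(0x1F,0x2)\Sata(0x0,0xFFFF,0x0)\HD(3,GPT,17337FC1-A0F7-4C73-DEA1-363BA11AB811,0x3A346008,0x40000)'
printf '%s\n' "$line" | grep -o 'HD([^)]*)'
```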
Since OS X Yosemite, CoreStorage allows you to rename the logical volume of a Fusion Drive if you wish to do so:
sudo diskutil cs rename "Macintosh HD" "Fusion Drive"
The Fusion Drive now shows up as “Fusion Drive” instead of “Macintosh HD”, the name I had initially chosen. The OS X main volume is still named “Macintosh HD”.
Sep 29 19:19:41 wopr sshd: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
If you’re getting this error message in the log file, you most likely have the ed25519 HostKey enabled in your sshd_config file but for some reason, no host key was generated for it.
Since openssh-6.4, you can run ssh-keygen with the -A flag to generate any missing host keys:
$ ssh-keygen -A
ssh-keygen: generating new host keys: ED25519
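Alternatively, if you only want to (re)create the missing ed25519 key by hand, something like this should work (my sketch, assuming an OpenSSH version with ed25519 support):

```shell
# Generate just the ed25519 host key pair with an empty passphrase.
# The path matches the one from the error message above.
ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
```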
Usually, my first step after setting up a new Ubuntu guest is to enable serial console access in order to gain shell access on the newly created VM.
Step 1 – Activate the serial console in the guest
Change the GRUB_CMDLINE_LINUX_DEFAULT to:
Don’t forget to update GRUB afterwards by running update-grub.
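The exact value isn’t shown above; as an illustration only, a typical serial-console setup in /etc/default/grub looks something like this (my sketch, with the baud rate matching the 38400 getty used in Step 2):

```shell
# /etc/default/grub -- illustrative values, not taken from the post.
GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0,38400n8"
# Optional: show GRUB's own menu on the serial port as well.
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=38400 --unit=0 --word=8 --parity=no --stop=1"
```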
Step 2 – Create the serial console in the guest
cp /etc/init/tty1.conf /etc/init/ttyS0.conf
nano /etc/init/ttyS0.conf
Edit ttyS0.conf and replace tty1 with ttyS0 in the last line so it reads something like “exec /sbin/getty -8 38400 ttyS0”.
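For reference, the resulting upstart job might look roughly like this (a sketch based on Ubuntu’s stock tty1.conf; your copy may contain a few extra stanzas):

```
# ttyS0 - getty
# This service maintains a getty on ttyS0

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -8 38400 ttyS0
```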
Reboot the VM.
Step 3 – Log in from the host
virsh console myvm
This is it! You just gained console access to your VM.
Tip: To exit the console, hit CTRL-]. It doesn’t matter where ] is located in your keyboard layout: press CTRL plus the physical key below the <BACKSPACE> key and to the left of the <ENTER> key.
At the time of this writing, OVH is not providing a CoreOS installation template for its Kimsufi servers. Since there is no virtual KVM console available for the entry-level servers, I tried to use OVH’s iPXE API. This approach would have worked well were it not for the CoreOS installer, which tries to load binaries from the installation script after overwriting that very partition – which always results in a segfault. Also, the API is only available for the older Kimsufi 2G servers on OVH’s V6 control panel, not for the current Kimsufi servers, for which OVH doesn’t provide an API at this time. Fortunately, OVH provides a “rescue mode” which lets us boot from a USB stick that is permanently plugged in on all Kimsufi servers. Continue reading “How to install CoreOS on an OVH Kimsufi low-end dedicated server”
Here’s an overview of natively supported PCI-e (64-bit) network interface controllers (NICs) for OS X. I’ve had the chance to test some of them in my current Hackintosh build.
HP NC360T PCI-Express PRO/1000
The HP NC360T dual-port PCI-e network adapter works out of the box in OS X. However, as of OS X 10.8.2, Apple changed something in the driver, resulting in a link loss whenever the network is under considerable load. If this happens, the link can be brought back to life by deactivating and reactivating the network interface in OS X’s control panel. Do not buy this network card if you intend to use it with a recent OS X version.
Continue reading “Native Gigabit PCI-e Network Adapter / NIC for OS X”
Updated 2015-10-21: This rig still works and performs awesomely on OS X El Capitan 10.11! I’m able to run it without any unsigned drivers and with full system protection; csrutil says “System Integrity Protection status: enabled.”
My ASUS P6T Hackintosh died because of a busted capacitor on the motherboard. I had to decide whether to buy an Apple desktop computer or to build a new Hackintosh. I would have bought an (internally) expandable Mac Pro, but the current trash can just doesn’t appeal to me.
I had three goals for the new build:
- Since kernel extension signing is mandatory in OS X Yosemite (at least in the dev previews/public beta versions), it has to be as vanilla as possible.
So, without further ado, here’s the new build:
- Mainboard: GIGABYTE GA-Z97X-UD5H
- CPU: Intel 4790K Devil’s Canyon with a Noctua NH-U14S fan
- RAM: 16 GB Patriot DDR3 2400 MHz
- Graphics: GIGABYTE GTX 760 OC
- Case: Fractal Define XL R2 Black Pearl
- PSU: be quiet! Straight Power E9 (580W)
- Audio: Turtle Beach Audio Advantage Amigo II USB Adapter
- NIC: Intel Gigabit CT Desktop
- Bluetooth: IOGear GBU521
- Storage: Several SSDs and HDDs
The Z97X-UD5H uses Intel’s latest 9 Series chipset, which to date is not used in any Apple computer. There’s a good chance Apple will use this chipset in the next iMac refresh in Q3/Q4 ’14. Even though the chipset is not officially supported in OS X, it runs just fine, even without a custom DSDT/SSDT! Continue reading “New Hackintosh build based on GIGABYTE GA-Z97X-UD5H”
I prefer strongSwan over Openswan because it’s still in active development, easier to set up, and doesn’t require an L2TP daemon. I prefer a simple IKEv1 setup using PSK and XAUTH over certificates. If you plan to share your VPN server with your friends, it’s also a lot easier for them to set up without certificates. I haven’t tried the VPN configuration below with non-Apple clients, but it works well with iOS and OS X clients. Make sure to use the Cisco IPSec VPN profile, not the L2TP over IPSec profile you’d need for Openswan. While strongSwan works well with KVM and Xen guests, it probably won’t work with non-virtualised containers like OpenVZ or LXC. Continue reading “strongSwan 5 based IPSec VPN, Ubuntu 14.04 LTS and PSK/XAUTH”
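To give an idea of what such a setup looks like, here’s a minimal ipsec.conf sketch for an IKEv1 PSK/XAUTH (“Cisco IPSec”) responder. The connection name, client subnet, and credentials below are placeholders of mine, not values from the full post:

```
# /etc/ipsec.conf -- minimal IKEv1 XAUTH/PSK sketch (placeholder values)
conn ios
    keyexchange=ikev1
    authby=xauthpsk
    xauth=server
    left=%defaultroute
    leftsubnet=0.0.0.0/0
    right=%any
    rightsourceip=10.99.0.0/24
    auto=add
```

And the matching /etc/ipsec.secrets with the pre-shared key plus one XAUTH user:

```
: PSK "YourSharedSecret"
alice : XAUTH "AlicesPassword"
```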
LXC is awesome! You can create and start your own virtual container with just 3 commands in Ubuntu 14.04.
apt-get install lxc debootstrap lxc-templates
lxc-create -t ubuntu -n demo
lxc-start -n demo -d
It doesn’t get any easier than this. There’s even a Bootstrap-based frontend available: LXC Web Panel.
Unfortunately, LXC Web Panel doesn’t work with LXC 1.0 which is part of Ubuntu 14.04. Fortunately though, there’s a fork available on GitHub which adds support for LXC 1.0:
I re-forked claudyus’ LXC Web Panel fork and added support for Ubuntu 14.04 and a few other things. My forked fork is available here: https://github.com/trick77/LXC-Web-Panel Claudyus has already updated his repository with my changes.
By the way, the original author of LXC Web Panel said he’s currently working on a Bootstrap 3 based version for LXC 1.0 which will include a RESTful API and other new features. Make sure to follow this guy on GitHub: https://github.com/googley
This may help when setting up your own DNS unblocking solution:
Once everything has been set up properly, all ticks should be green like in this screenshot:
I just pushed another update to GitHub, please make sure to use a configuration generated with the latest generator version or the tester will fail. My main motivation to create this tester was to reduce the amount of support requests I’m receiving. Let’s see how well this goes :-)