Query status information from Huawei’s HiLink 3G/LTE modems

While Huawei provides status information for its HiLink modems via a web page, this is hardly useful when the modem is attached to a headless Linux server. I just published a small Python-based command-line tool on GitHub which displays some useful information about the modem's status.

root@wopr~#: python ./hstatus.py
Huawei E3372 LTE Modem (IMEI: 121032526613216)
 Hardware version: CL1D3271AM Ver.A
 Software version: 22.286.53.01.161
 Web UI version: 16.100.05.00.03-Mod1.4
 Serial: L8FDW11512114431
 MAC address (modem): 00:0D:87:12:1C:1D
 Connection status: Connected
   Network type: UMTS (3G)
   Signal level: ▁▃▄▆█
   Roaming: Enabled
   Modem WAN IP address: 10.197.75.231
   Public IP address: 185.13.106.181
   DNS IP addresses: 212.113.0.5, 66.28.0.62
   Network operator: Swisscom
   Connected for: 03:15:15 (hh:mm:ss)
   Downloaded: 615.17 KB
   Uploaded: 258.69 KB
 Total downloaded: 14.69 MB
 Total uploaded: 1.34 MB
 Unread SMS: 1

The tool has been tested on a Huawei E3276 and an E3372 modem. For the newer E3372 I had to add some code to supply a RequestVerificationToken in the HTTP headers.
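
The gist of what the tool does for the E3372 (illustrative only, not the actual code; 192.168.8.1 is the usual HiLink address on this model, older firmwares tend to use 192.168.1.1 instead):

# fetch a session cookie and an anti-CSRF token, then call the status API with them
IP=192.168.8.1
RESP=$(curl -s http://$IP/api/webserver/SesTokInfo)
SES=$(echo "$RESP" | sed -n 's|.*<SesInfo>\(.*\)</SesInfo>.*|\1|p')
TOK=$(echo "$RESP" | sed -n 's|.*<TokInfo>\(.*\)</TokInfo>.*|\1|p')
curl -s -H "Cookie: $SES" -H "__RequestVerificationToken: $TOK" http://$IP/api/monitoring/status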

Feel free to send a pull request on GitHub with your own tweaks!

The repository is available here: github.com/trick77/huawei-hilink-status

How to receive Cymru’s IPv6 Bogon list using Quagga

The BGP sample configuration for Quagga provided on Cymru's website didn't work for me. Since my AS is IPv6-only, I'm only interested in the IPv6 Bogon feed. Here's an excerpt from my Quagga bgpd.conf:

router bgp aut-num
bgp router-id id
bgp log-neighbor-changes
no bgp default ipv4-unicast

neighbor cymru-bogon peer-group
neighbor cymru-bogon remote-as 65332
neighbor cymru-bogon timers 3600 10800
neighbor cymru-bogon description AS65332 Cymru FullBogon Feed
neighbor cymru-bogon ebgp-multihop 255
neighbor cymru-bogon password changeme
neighbor cymru-bogon activate
neighbor cymru-bogon prefix-list pl-cymru-ipv4-in in
neighbor cymru-bogon prefix-list pl-cymru-out out
neighbor 38.xx.xx.xx peer-group cymru-bogon
neighbor 193.xx.xx.xx peer-group cymru-bogon

address-family ipv6
  neighbor cymru-bogon activate
  neighbor cymru-bogon soft-reconfiguration inbound
  neighbor cymru-bogon route-map rm-cymru-ipv6-in in
  neighbor cymru-bogon prefix-list pl-cymru-ipv6-out out
  neighbor 38.xx.xx.xx peer-group cymru-bogon
  neighbor 193.xx.xx.xx peer-group cymru-bogon
exit-address-family

ip prefix-list pl-cymru-ipv4-in seq 5 deny any
ip prefix-list pl-cymru-out seq 5 deny any
ipv6 prefix-list pl-cymru-ipv6-out seq 5 deny any
ip community-list 10 permit 65332:888

route-map rm-cymru-ipv6-in permit 10
  match community 10
  set ip next-hop 192.0.2.1
  set ipv6 next-hop global 100::dead:beef:1

Since Zebra won't install routes learned over BGP whose next-hop isn't routable, I also needed to make sure that 100::dead:beef:1 is (null-)routed. My solution was to set up a Cisco-style Null0 interface in /etc/network/interfaces:

# blackhole
iface Null0 inet manual
  pre-up ip link add dev Null0 type dummy
  pre-up ip link set Null0 up
  up ip -6 route add 100::/64 dev Null0 proto static metric 255
  up ip -4 route add 192.0.2.1/32 dev Null0 proto static metric 255
  down ip link del dev Null0

By the way, the 100::/64 prefix I'm using for null-routing is the designated IPv6 discard-only address block (RFC 6666).
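
To double-check that both blackhole routes are actually installed (same addresses as above):

ip -6 route get 100::dead:beef:1
ip route get 192.0.2.1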

Once the BGP session is up, only IPv6 routes will be learned from Cymru’s bogon feed. I’m using IPv4 transport for the BGP session but it should work using IPv6 transport as well.

BGP neighbor is 38.xx.xx.xx, remote AS 65332, local AS xxxxx, external link
 Member of peer-group cymru-bogon for session parameters
  BGP version 4, remote router ID 38.xx.xx.xx
  BGP state = Established, up for 18:52:18
  Last read 00:11:49, hold time is 10800, keepalive interval is 3600 seconds
  Configured hold time is 10800, keepalive interval is 3600 seconds
  Neighbor capabilities:
    4 Byte AS: advertised and received
    Route refresh: advertised and received(old & new)
    Address family IPv4 Unicast: advertised and received
    Address family IPv6 Unicast: advertised and received
  Message statistics:
    Inq depth is 0
    Outq depth is 0
                         Sent       Rcvd
    Opens:                  1          1
    Notifications:          0          0
    Updates:                0        118
    Keepalives:            20         19
    Route Refresh:          0          0
    Capability:             0          0
    Total:                 21        138
  Minimum time between advertisement runs is 30 seconds

 For address family: IPv4 Unicast
  cymru-bogon peer-group member
  AF-dependant capabilities:
    Outbound Route Filter (ORF) type (128) Prefix-list:
      Send-mode: received
  Community attribute sent to this neighbor(both)
  Inbound path policy configured
  Outbound path policy configured
  Incoming update prefix filter list is *pl-cymru-ipv4-in
  Outgoing update prefix filter list is *pl-cymru-out
  0 accepted prefixes

 For address family: IPv6 Unicast
  cymru-bogon peer-group member
  Inbound soft reconfiguration allowed
  Community attribute sent to this neighbor(both)
  Inbound path policy configured
  Outbound path policy configured
  Outgoing update prefix filter list is *pl-cymru-ipv6-out
  Route map for incoming advertisements is *rm-cymru-ipv6-in
  60088 accepted prefixes

  Connections established 1; dropped 0
  Last reset never
  External BGP neighbor may be up to 255 hops away.
Local host: 185.xx.xx.xx, Local port: 59623
Foreign host: 38.xx.xx.xx, Foreign port: 179
Nexthop: 185.xx.xx.xx
Nexthop global: 2001:xxxx:xxxx::
Nexthop local: fe80::225:xxxx:xxxx:xxxx
BGP connection: non shared network
Read thread: on  Write thread: off

Setting up a Huawei E3276-150 4G/LTE USB modem on Ubuntu Server/Desktop

I just received an unlocked Huawei E3276s-150 4G/LTE USB modem/surfstick I bought on eBay the other day. I went for the E3276s-150 because the -150 seemed to be the most compatible option for European 4G mobile networks. There are even cheaper Huawei E3276 models like the E3276-920, which you can buy for less than 20 bucks. However, the -920 seems to be frequency-optimized for Asian mobile networks and may not perform as well as an E3276s-150 in Western Europe.
To my great surprise, setting up the Huawei E3276 on Ubuntu 15.04 Desktop was literally plug & play. A few seconds after plugging it in, I was greeted with a “Connection Established” message. Nicely done, Canonical!

On Ubuntu Server the stick is, like most Huawei modems, initially recognised as a memory card reader. It has to be switched into USB modem mode first using the usb_modeswitch command in order to establish a mobile network connection. If it's not already installed, usb_modeswitch can be installed using apt-get -y install usb-modeswitch.

Memory card reader mode:

drfalken@wopr:~# lsusb
Bus 001 Device 007: ID 12d1:1f01 Huawei Technologies Co., Ltd.

To turn the E3276 into a modem:

drfalken@wopr:~# usb_modeswitch -v 12d1 -p 1f01 -M '55534243123456780000000000000011062000000101000100000000000000'

If the change was successful, lsusb shows a different USB product id now:

drfalken@wopr:~# lsusb
Bus 001 Device 007: ID 12d1:14db Huawei Technologies Co., Ltd.

At the same time, dmesg should output something like this:

drfalken@wopr:~# dmesg -T
[Fri May 29 20:55:41 2015] usb 1-1: New USB device found, idVendor=12d1, idProduct=14db
[Fri May 29 20:55:41 2015] usb 1-1: New USB device strings: Mfr=2, Product=1, SerialNumber=0
[Fri May 29 20:55:41 2015] usb 1-1: Product: HUAWEI Mobile
[Fri May 29 20:55:41 2015] usb 1-1: Manufacturer: HUAWEI Technology
[Fri May 29 20:55:41 2015] cdc_ether 1-1:1.0 eth1: register 'cdc_ether' at usb-0000:00:14.0-1, CDC Ethernet Device, 57:2d:70:33:22:10

Since the modem registered itself on eth1 (the name depends on the number of existing network devices, so it doesn't HAVE to be eth1), we can now simply fetch an IP address from the modem using:

drfalken@wopr:~# dhclient -v eth1
Internet Systems Consortium DHCP Client 4.2.4
Copyright 2004-2012 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth1/57:2d:70:33:22:10
Sending on   LPF/eth1/57:2d:70:33:22:10
Sending on   Socket/fallback
DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 3 (xid=0x3b73326b)
DHCPREQUEST of 192.168.1.100 on eth1 to 255.255.255.255 port 67 (xid=0x3b73326b)
DHCPOFFER of 192.168.1.100 from 192.168.1.1
DHCPACK of 192.168.1.100 from 192.168.1.1
bound to 192.168.1.100 -- renewal in 36557 seconds.

Yay, eth1 has been assigned 192.168.1.100 with a /24 prefix and a gateway at 192.168.1.1, which is the modem itself (it even serves a web interface on port 80).
By the way, make sure none of your local networks use 192.168.1.0/24 or it will collide with the Huawei's internal network.

Depending on a few factors, dhclient may or may not have changed the default gateway. If the default gateway points to the modem, it will be at 192.168.1.1 on eth1:

drfalken@wopr:~# ip route show | grep default
default via 192.168.1.1 dev eth1

If this is not the case, you may have to remove the existing default gateway and replace it using:

drfalken@wopr:~# ip route del default ; ip route add default via 192.168.1.1

And… connected!

drfalken@wopr:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=22.7 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=34.9 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=39.7 ms

Make sure /etc/resolv.conf contains a valid nameserver if you can’t resolve domain names.
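
For a quick test you can drop in a public resolver manually (on Ubuntu, resolvconf may overwrite this again later):

echo 'nameserver 8.8.8.8' > /etc/resolv.conf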

To switch the Huawei E3276 into a modem at boot time, create /etc/udev/rules.d/70-usb-modeswitch.rules and insert this line:

ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="12d1", ATTRS{idProduct}=="1f01", RUN+="/usr/sbin/usb_modeswitch -v 12d1 -p 1f01 -M '55534243123456780000000000000011062000000101000100000000000000'"

To automatically add a valid nameserver in /etc/resolv.conf when eth1 comes up, add these lines to /etc/dhcp/dhclient.conf:

interface "eth1" {
  prepend domain-name-servers 8.8.8.8;
  request subnet-mask, broadcast-address, time-offset, routers,
          domain-name, domain-name-servers, domain-search, host-name,
          dhcp6.name-servers, dhcp6.domain-search,
          netbios-name-servers, netbios-scope, interface-mtu,
          rfc3442-classless-static-routes, ntp-servers,
          dhcp6.fqdn, dhcp6.sntp-servers;
  require routers, domain-name-servers;
}

If you don’t want to run dhclient manually, you can either add an eth1 dhcp section in /etc/network/interfaces or add the dhclient eth1 command to /etc/rc.local.
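
The /etc/network/interfaces variant is a simple DHCP stanza (assuming the modem keeps showing up as eth1):

auto eth1
iface eth1 inet dhcp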

Just FYI: I’ve been using Vivid Vervet’s (Ubuntu 15.04) 3.19 kernel in Ubuntu Server 14.04 LTS. Vivid’s newer kernel can be installed using apt-get install linux-image-generic-lts-vivid. Not sure if it makes a difference compared to 14.04’s default kernel though.

Dockerflix: Docker-based SNI proxy for watching U.S. Netflix, Hulu, MTV, Vevo, Crackle, ABC, NBC, PBS…

Recently, I published a new project on GitHub called Dockerflix. Instead of HAProxy, which my earlier DNS-unblocking setup relied on, Dockerflix uses sniproxy. To make the installation a breeze, I boxed the proxy into a Docker container and wrote a small, Python-based Dnsmasq configuration generator. And voilà: DNS unblocking as a service (DaaS) ;-)

Thanks to sniproxy's ability to proxy requests based on a wildcard/regex match, it's now much easier to add support for a service: usually it's enough to add the main domain name to the proxy and DNS configuration, and Dockerflix will be able to hop the geo-fence in most cases. Since most on-demand streaming providers use an off-domain CDN for the video stream, only website traffic gets sent through Dockerflix. A few exceptions apply though, notably when the video stream itself is geo-fenced.
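
To illustrate the idea (the domain and IP address are placeholders, not Dockerflix's actual generated config): Dnsmasq points a service's main domain at the Dockerflix host, and a sniproxy table entry forwards matching hostnames on to whatever host the client originally asked for:

# dnsmasq.conf: resolve the whole domain to the Dockerflix host
address=/netflix.com/203.0.113.10

# sniproxy.conf: proxy anything matching the pattern to the requested hostname
table {
    .*\.netflix\.com *
}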

Dockerflix only handles plain HTTP requests and TLS connections that use the SNI extension. Some media players don't support SNI and thus won't work with Dockerflix.
If you need to proxy plain old SSL or TLS without SNI for a device, have a look at the non-SNI approach shown in tunlr-style-dns-unblocking.
A few media players (e.g. Chromecast) ignore your DNS settings and always use a pre-configured DNS resolver which can't be changed directly (it can still be done though, by rerouting those DNS requests with iptables).
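
A rough sketch of that iptables workaround, assuming the device's DNS queries pass through a Linux router (all addresses are placeholders):

# rewrite the Chromecast's hard-coded DNS queries so they hit your own resolver instead
iptables -t nat -A PREROUTING -s 192.168.0.50 -p udp --dport 53 -j DNAT --to-destination 192.168.0.2
iptables -t nat -A PREROUTING -s 192.168.0.50 -p tcp --dport 53 -j DNAT --to-destination 192.168.0.2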

Check it out: https://github.com/trick77/dockerflix

Free multi-domain SSL certificates from WoSign and HAProxy OCSP stapling

Since everyone can now get free 2-year multi-domain certificates from WoSign, I grabbed one for one of my websites. However, WoSign's OCSP responder is located in China, which may, depending on your and your server's location, add noticeable latency while the web browser verifies the certificate's revocation status. In my case, from Europe:

PING ocsp6.wosign.com (111.206.66.61) 56(84) bytes of data.
64 bytes from 111.206.66.61: icmp_seq=1 ttl=53 time=428 ms
64 bytes from 111.206.66.61: icmp_seq=2 ttl=53 time=347 ms
64 bytes from 111.206.66.61: icmp_seq=3 ttl=53 time=312 ms
64 bytes from 111.206.66.61: icmp_seq=4 ttl=53 time=328 ms
64 bytes from 111.206.66.61: icmp_seq=5 ttl=53 time=313 ms

OCSP stapling comes in handy to reduce the latency of that revocation status check, again depending on your clients' and your server's location.

Here’s the all-in-one shell script in /etc/cron.daily I’m using…

  1. to create the domain’s OCSP file for HAProxy
  2. to inject the latest OCSP data into a running HAProxy instance using its stats socket

#!/bin/sh
ROOT_CERT_FILE=/etc/ssl/private/wosign-root-bundle.crt
SERVER_CERT_FILE=/etc/haproxy/certs.d/domain.crt
HAPROXY_SOCKET=/var/run/haproxy.socket
OCSP_URL=`/usr/bin/openssl x509 -in $SERVER_CERT_FILE -text | grep -i ocsp | cut -d":" -f2-2,3`
OCSP_FILE=${SERVER_CERT_FILE}.ocsp

/usr/bin/openssl ocsp -noverify -issuer $ROOT_CERT_FILE -cert $SERVER_CERT_FILE -url "$OCSP_URL" -respout $OCSP_FILE -header Host `echo "$OCSP_URL" | cut -d"/" -f3`
echo "set ssl ocsp-response $(/usr/bin/base64 -w 10000 $OCSP_FILE)" | socat stdio $HAPROXY_SOCKET

To check if OCSP stapling works:

openssl s_client -connect mydomain.xyz:443 -tls1 -tlsextdebug -status

or for SNI-only configurations:

openssl s_client -connect mydomain.xyz:443 -servername mydomain.xyz -tls1 -tlsextdebug -status

If it works, there should be an OCSP section in the response like this:

OCSP response:
======================================
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = CN, O = WoSign CA Limited, CN = WoSign Free SSL OCSP Responder(G2)
    Produced At: Mar  8 14:01:14 2015 GMT
    .
    .
    .

A few notes:

  1. HAProxy's stats socket needs to be enabled (see the example after this list)
  2. wosign-root-bundle.crt was taken from the Apache bundle in the certificate .zip file I received from WoSign
  3. /etc/haproxy/certs.d/domain.crt contains the private key and the certificate bundle from the “for Other Server” directory, however you could remove the last certificate since it’s the root CA cert.
  4. Requires HAProxy >= 1.5
  5. If socat is missing: apt-get install socat in Debian/Ubuntu
  6. Always aim for an A or A+ grade: SSL Server Test
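
Regarding note 1: the stats socket is enabled in haproxy.cfg's global section. The path below matches HAPROXY_SOCKET in the script above, and level admin is required for the set ssl ocsp-response command:

global
    stats socket /var/run/haproxy.socket mode 600 level admin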

How to install CoreOS on an OVH Kimsufi low-end dedicated server

Wouldn't it be cool to build a bare-metal high-availability cluster using CoreOS and a handful of DDoS-protected, €5/month Kimsufi servers from OVH? Here's how to install CoreOS on a Kimsufi server.

At the time of this writing, OVH is not providing a CoreOS installation template for the Kimsufi servers. Since there is no virtual KVM console available for these entry-level servers, I tried to use OVH's iPXE API. This approach would have worked were it not for the CoreOS installer, which tries to load binaries from the very partition it has just overwritten – which always results in a segfault. Also, the API is only available for the older Kimsufi 2G servers on OVH's V6 control panel, not for the current Kimsufi servers, for which OVH doesn't provide an API at this time. Fortunately, OVH provides a “rescue mode” which lets us boot from a USB stick that is permanently plugged into all Kimsufi servers.

strongSwan 5 based IPSec VPN, Ubuntu 14.04 LTS and PSK/XAUTH

I prefer strongSwan over Openswan because it's still in active development, easier to set up and doesn't require an L2TP daemon. I also prefer a simple IKEv1 setup using PSK and XAUTH over certificates. If you plan to share your VPN server with your friends, it's a lot easier for them to set up without certificates. I haven't tried the VPN configuration below with non-Apple clients, but it works well with iOS and OS X clients. Make sure to use the Cisco IPSec VPN profile, not the L2TP over IPSec profile you need for Openswan. While strongSwan works well in KVM and Xen guests, it probably won't work in OS-level containers like OpenVZ or LXC.
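
For reference, a minimal sketch of the kind of conn section such a setup uses (not the complete configuration from the post; the client subnet is a placeholder):

# /etc/ipsec.conf – IKEv1 with PSK and XAUTH for the Cisco IPSec profile on iOS/OS X
conn ios
    keyexchange=ikev1
    authby=xauthpsk
    xauth=server
    left=%defaultroute
    leftsubnet=0.0.0.0/0
    right=%any
    rightsourceip=10.99.99.0/24
    auto=add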

LXC 1.0 Web Panel for Ubuntu 14.04

LXC is awesome! You can create and start your own virtual container with just 3 commands in Ubuntu 14.04.

apt-get install lxc debootstrap lxc-templates
lxc-create -t ubuntu -n demo
lxc-start -n demo -d
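
To verify the container is running and to get a shell inside it:

lxc-ls --fancy
lxc-attach -n demo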

It doesn't get any easier than this. There's even a Bootstrap-based frontend available: LXC Web Panel.


Unfortunately, LXC Web Panel doesn’t work with LXC 1.0 which is part of Ubuntu 14.04. Fortunately though, there’s a fork available on GitHub which adds support for LXC 1.0:

https://github.com/claudyus/LXC-Web-Panel

I re-forked claudyus' LXC Web Panel fork and added support for Ubuntu 14.04 and a few other things. Claudyus has already updated his repository with my changes. My forked fork is available here: https://github.com/trick77/LXC-Web-Panel

By the way, the original author of LXC Web Panel said he’s currently working on a Bootstrap 3 based version for LXC 1.0 which will include a RESTful API and other new features. Make sure to follow this guy on GitHub: https://github.com/googley

DNS unblocking setup tester

This may help setting up your own DNS unblocking solution:

https://trick77.com/dns-unblocking-setup-tester/

Once everything has been set up properly, all checks in the tester should turn green.


I just pushed another update to GitHub, please make sure to use a configuration generated with the latest generator version or the tester will fail. My main motivation to create this tester was to reduce the amount of support requests I’m receiving. Let’s see how well this goes :-)

Tomcat freezes while starting in Ubuntu 14.04 LTS

After upgrading one of my KVM guests to Ubuntu Server 14.04 LTS, Tomcat 7 started to freeze during startup at:

INFO: Deploying configuration descriptor /etc/tomcat7/Catalina/localhost/ROOT.xml

Only after several minutes does Tomcat print the following message and start accepting requests:

INFO: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [295,490] milliseconds.
INFO: Server startup in 296882 ms

If you don’t have a requirement for strong cryptography in Tomcat, you might as well switch to the less secure non-blocking /dev/urandom source instead of /dev/random.
Create a file named setenv.sh in /var/lib/tomcat7/bin and make it executable:

#!/bin/sh
export CATALINA_OPTS="-Djava.security.egd=file:/dev/./urandom"

..and restart Tomcat.
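
For example, assuming the stock Ubuntu tomcat7 package:

chmod +x /var/lib/tomcat7/bin/setenv.sh
service tomcat7 restart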

It will now restart within seconds thanks to the non-blocking random source.

Supermicro NTP DDoS Vulnerability

I received a notification that one of my dedicated servers was taking part in an NTP-based DDoS reflection attack. At first I was like “No way!” since I don't run NTP on any of my servers. Closer inspection of the source IP address revealed that the attack was coming from my Supermicro server's built-in IPMI controller. And indeed, Supermicro is using a vulnerable NTP version on its IPMI controllers:

ntpdc -n -c monlist ipmi.mysupermicroserver.com
remote address          port local address      count m ver rstr avgint lstint
===============================================================================
186.2.161.nnn          53842 76.20.120.nn       51127 7   2    0      0      0
217.147.208.n            123 76.20.120.nn           1 4   4    0      0      7
130.60.204.nn            123 76.20.120.nn           1 4   4    0      0      8

The quickest fix is to turn NTP sync off in IPMI as described here. If for some reason you have a requirement for NTP, here’s how to fix the Supermicro firmware on your own (not for the faint-hearted!).

Since Supermicro has a spotty track record when it comes to IPMI controller security, it’s highly recommended to define a set of jump hosts in the IP Access Control menu. Here’s a gotcha: the default policy is set to ACCEPT which means you have to add a DROP rule at the end with 0.0.0.0/0. Obviously, a private VLAN would be the preferred way, but if no VLAN is available, IP access control comes in handy. The IP access control list will filter any traffic to the IPMI controller except for the defined IP ranges. It will block access to NTP as well.

Still waiting for Supermicro to finally fix the issue in a new firmware revision though…

DNS-unblocking configuration for CBS’s iOS app

I love watching the Late Late Show with Craig Ferguson. Here’s what’s needed to watch CBS content on iPad outside the U.S. using my DNS-unblocking config generator.


   {
      "name": "cbs-akamai-ipad",
      "dest_addr": "ipad-streaming.cbs.com",
      "modes": [
        {
          "port": 80,
          "mode": "http"
        },
        {
          "port": 443,
          "mode": "https"
        }
      ],
      "catchall": true,
      "enabled": true
    }

Add this to config.json, regenerate the configuration files and make sure to upload them to the right places. As with NBC’s iOS app, the video stream itself is geo-fenced which may lead to considerable bandwidth consumption on your VPS server.

By the way, is it just me or does audio quality suck badly in CBS’s iOS app?

DNS-unblocking configuration for NBC’s iOS app

Today, NBC updated its mobile app for iPad and iPhone. The latest version features AirPlay support which means you can watch NBC TV shows on a large TV if you have an Apple TV connected to it. NBC offers free (ad-supported) content in its iOS app, including:

  • The Tonight Show Starring Jimmy Fallon
  • Late Night With Seth Meyers
  • About A Boy
  • Grimm
  • The Blacklist
  • The Voice
  • …and many more


In order to use the NBC app on iPad or iPhone outside the U.S., a new configuration entry is required in my non-SNI DNS-unblocking config generator’s config.json file:

    {
      "name": "nbc-ios",
      "dest_addr": "tve_nbc-vh.akamaihd.net",
      "modes": [
        {
          "port": 80,
          "mode": "http"
        },
        {
          "port": 443,
          "mode": "https"
        }
      ],
      "catchall": true,
      "enabled": true
    }

Since NBC uses a geo-fenced video stream from Akamai, the entire content stream has to be proxied through HAProxy which may – depending on your usage – lead to considerable bandwidth usage on your remote VPS server.

Fulltext search for Tiny Tiny RSS (TTRSS) with Sphinx and MySQL in Debian/Ubuntu

I haven’t found a working tutorial on setting up Sphinx fulltext search for the awesome Tiny Tiny RSS Reader (TTRSS) and MySQL in Debian/Ubuntu. So, without further ado, here it is:

apt-get install sphinxsearch

Create /etc/sphinxsearch/sphinx.conf:

source ttrss
{
	type			= mysql
	sql_host		= localhost
	sql_user		= ttrss
	sql_pass		= changeme
	sql_db			= ttrss
	sql_port		= 3306
	sql_query_pre		= SET NAMES utf8

	# UNIX socket name
	# optional, default is empty (reuse client library defaults)
	# usually '/var/lib/mysql/mysql.sock' on Linux
	# usually '/tmp/mysql.sock' on FreeBSD
	#
	# sql_sock		= /var/lib/mysql/mysql.sock

        sql_query		= \
		SELECT int_id AS id, ref_id, UNIX_TIMESTAMP() AS updated, \
 			ttrss_entries.title AS title, link, content, \
                        ttrss_feeds.title AS feed_title, \
                        marked, published, unread, \
                        author, ttrss_user_entries.owner_uid \
                        FROM ttrss_entries, ttrss_user_entries, ttrss_feeds \
                        WHERE ref_id = ttrss_entries.id AND feed_id = ttrss_feeds.id;


	sql_attr_uint		= owner_uid
	sql_attr_uint		= ref_id

	sql_ranged_throttle	= 0

	sql_query_info		= \
		SELECT * FROM ttrss_entries,  \
			ttrss_user_entries WHERE ref_id = id AND int_id=$id


}

source delta : ttrss 
{
        sql_query		= \
                SELECT int_id AS id, ref_id, UNIX_TIMESTAMP() AS updated, \
                        ttrss_entries.title AS title, link, content, \
                        ttrss_feeds.title AS feed_title, \
                        marked, published, unread, \
                        author, ttrss_user_entries.owner_uid \
                        FROM ttrss_entries, ttrss_user_entries, ttrss_feeds \
                        WHERE ref_id = ttrss_entries.id AND feed_id = ttrss_feeds.id \
                        AND ttrss_entries.updated > NOW() - INTERVAL 24 HOUR;

        sql_query_killlist      = \
		SELECT int_id FROM ttrss_entries, ttrss_user_entries \
                	WHERE ref_id = ttrss_entries.id AND ttrss_entries.updated > NOW() - INTERVAL 24 HOUR;

}

index ttrss
{
        source			= ttrss
	path			= /var/lib/sphinxsearch/data/ttrss
	docinfo			= extern
	mlock			= 0
	morphology		= none
	min_word_len		= 1
	charset_type		= utf-8
	min_prefix_len	        = 3
	prefix_fields		= title, content, feed_title, author
	enable_star		= 1
	html_strip		= 1

}

index delta : ttrss 
{
	source			= delta
	path			= /var/lib/sphinxsearch/data/ttrss_delta
}

indexer
{
	mem_limit		= 32M
}

searchd
{
	listen			= 127.0.0.1:9312

	log			= /var/log/sphinxsearch/searchd.log
	query_log		= /var/log/sphinxsearch/query.log
	read_timeout		= 5
	client_timeout		= 300
	max_children		= 30
	pid_file		= /var/run/sphinxsearch/searchd.pid
	max_matches		= 1000
	seamless_rotate		= 1
	preopen_indexes		= 1
	unlink_old		= 1
	mva_updates_pool	= 1M
	max_packet_size		= 8M
	max_filters		= 256
	max_filter_values	= 4096
	compat_sphinxql_magics  = 0
}

Create the indices using indexer --all
Set START=yes in /etc/default/sphinxsearch and start Sphinx using service sphinxsearch start
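
To keep the delta index fresh, something along these lines in /etc/cron.d should do (a sketch, adjust paths and intervals to taste):

# re-index recent articles every 15 minutes, rebuild the main index once a night
*/15 * * * * root /usr/bin/indexer --quiet --rotate delta
30 3 * * *   root /usr/bin/indexer --quiet --rotate ttrss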

The last step is to enable Sphinx search in TTRSS's config.php:

	// *********************
	// *** Sphinx search ***
	// *********************

	define('SPHINX_ENABLED', true);
	// Enable fulltext search using Sphinx (http://www.sphinxsearch.com)
	// Please see http://tt-rss.org/wiki/SphinxSearch for more information

And that's it: lightning-fast fulltext search in TTRSS! Probably only useful if you have a lot of feeds/articles and you keep them for quite a while before purging them into oblivion.