How to make a redundant multihomed anycast IPv4/IPv6 DNS cloud using djbdns/dbndns and tinydns on Debian
Posted by admin in DNS, Networking on 27 June, 2010
First you need N+1 servers which will form the cloud. Each server needs at least two NICs, connected to two different routers.
You also need an IP range for the anycast service. A /23 is best, so that you have enough address space and no problems with BGP filtering across multiple Internet exchange peerings; a smaller range can also work if you know how to set it up.
From this range you need four IPs for the DNS service itself: two for the authoritative DNS (tinydns) and two for the DNS cache (dnscache). Each server additionally needs three unique IPs from the range: one for each of its two NICs and one loopback address that identifies the server.
For example, four servers together need (4×3)+4 = 16 IPs. It is also possible to use the anycast range only for the loopback addresses and take the NIC addresses from another range depending on the location, but then the configuration gets more complicated every time you add a server to the cloud, so it is better to use a single range.
This article uses the imaginary example range 8.8.8.0/24, which is the range Google uses for its anycast DNS servers.
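To make the configuration below easier to follow, here is the resulting address plan. Only the service addresses and the dns21 addresses appear in the configs later; the addresses for dns22, dns11 and dns12 are an assumed illustration of how the rest of the /24 could be laid out:
8.8.8.7 and 8.8.8.77                  anycast authoritative DNS (tinydns)
8.8.8.8 and 8.8.8.88                  anycast DNS cache (dnscache)
8.8.8.21 (.22, .11, .12)              loopback IDs of dns21 (dns22, dns11, dns12)
8.8.8.128/30 ... 8.8.8.140/30         point-to-point links towards crs-01 (eth1 side)
8.8.8.192/30 ... 8.8.8.204/30         point-to-point links towards crs-02 (eth0 side)
2a02:131:1:8888::7, ::77, ::8, ::88   the matching IPv6 anycast addresses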
On each server you need to install:
- quagga routing software (or any other routing daemon)
- djbdns/dbndns and tinydns (or any other DNS daemon)
- iptables with DNAT support
# apt-get install quagga dbndns dnscache-run iptables
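The Debian packages only set up a single dnscache instance, so the second cache instance and the tinydns instances used later in this article have to be created by hand. A minimal sketch, assuming the standard djbdns dnscache-conf/tinydns-conf helpers from the dbndns package and runit supervision under /etc/sv (the dnscache/tinydns/dnslog user names are placeholders, use whatever accounts your installation has):
# second cache instance, bound to the eth1 unicast address of dns21
dnscache-conf dnscache dnslog /etc/sv/dnscache2 8.8.8.130
# two authoritative instances on the anycast loopback addresses
tinydns-conf tinydns dnslog /etc/sv/tinydns 8.8.8.7
tinydns-conf tinydns dnslog /etc/sv/tinydns2 8.8.8.77
# let the second instance share the zone data of the first one
echo /etc/sv/tinydns/root > /etc/sv/tinydns2/env/ROOT
# enable the services under runit
update-service --add /etc/sv/dnscache2
update-service --add /etc/sv/tinydns
update-service --add /etc/sv/tinydns2
The tinydns-ipv6 and tinydns2-ipv6 instances are created the same way with the IPv6 anycast addresses; this works because the dbndns fork includes the IPv6 patch that plain djbdns lacks.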
In our example we use 8.8.8.0/24 as the anycast range and the whole range is dedicated to this setup. We have two Cisco 7600 routers at two geographically separate sites.
Cisco 7600 crs-01 configuration needed for the anycast setup:
interface GigabitEthernet1/9
description ### ENI_R | ANTI_IT_0030 | xxx | eth1-dns21 ###
ip address 8.8.8.129 255.255.255.252
no ip redirects
no ip unreachables
no ip proxy-arp
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 7 0968415C
ip ospf network point-to-point
ip ospf hello-interval 1
ip ospf dead-interval 5
load-interval 30
ipv6 address 2A02:131:8888:FE01::1/64
ipv6 enable
ipv6 nd ra suppress
no ipv6 pim
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 5
ipv6 ospf database-filter all out
ipv6 ospf 16 area 8
no cdp enable
end
interface GigabitEthernet1/10
description ### ENI_R | ANTI_IT_0031 | xxx | eth1-dns22 ###
ip address 8.8.8.133 255.255.255.252
no ip redirects
no ip unreachables
no ip proxy-arp
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 7 0968415C
ip ospf network point-to-point
ip ospf hello-interval 1
ip ospf dead-interval 5
load-interval 30
ipv6 address 2A02:131:8888:FE02::1/64
ipv6 enable
ipv6 nd ra suppress
no ipv6 pim
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 5
ipv6 ospf database-filter all out
ipv6 ospf 16 area 8
no cdp enable
end
interface GigabitEthernet1/11
description ### ENI_R | ANTI_IT_0032 | xxx | eth1-dns11 ###
ip address 8.8.8.137 255.255.255.252
no ip redirects
no ip unreachables
no ip proxy-arp
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 7 0968415C
ip ospf network point-to-point
ip ospf hello-interval 1
ip ospf dead-interval 5
load-interval 30
ipv6 address 2A02:131:8888:FE03::1/64
ipv6 enable
ipv6 nd ra suppress
no ipv6 pim
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 5
ipv6 ospf database-filter all out
ipv6 ospf 16 area 8
no cdp enable
end
interface GigabitEthernet1/12
description ### ENI_R | ANTI_IT_0033 | xxx | eth1-dns12 ###
ip address 8.8.8.141 255.255.255.252
no ip redirects
no ip unreachables
no ip proxy-arp
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 7 0968415C
ip ospf network point-to-point
ip ospf hello-interval 1
ip ospf dead-interval 5
load-interval 30
ipv6 address 2A02:131:8888:FE04::1/64
ipv6 enable
ipv6 nd ra suppress
no ipv6 pim
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 5
ipv6 ospf database-filter all out
ipv6 ospf 16 area 8
no cdp enable
end
router ospf 10
router-id 1.2.3.4
ispf
log-adjacency-changes detail
auto-cost reference-bandwidth 100000
area 8 authentication message-digest
area 8 stub no-summary
timers throttle spf 10 100 5000
timers throttle lsa 10 100 5000
redistribute connected subnets
redistribute static subnets
passive-interface default
no passive-interface GigabitEthernet1/9
no passive-interface GigabitEthernet1/10
no passive-interface GigabitEthernet1/11
no passive-interface GigabitEthernet1/12
network 8.8.8.129 0.0.0.0 area 8
network 8.8.8.133 0.0.0.0 area 8
network 8.8.8.137 0.0.0.0 area 8
network 8.8.8.141 0.0.0.0 area 8
distribute-list prefix OSPF_DENY out
bfd all-interfaces
!
ip prefix-list OSPF_DENY seq 10 permit 0.0.0.0/0 le 32
ipv6 router ospf 16
router-id 8.8.8.1
log-adjacency-changes detail
auto-cost reference-bandwidth 100000
passive-interface default
no passive-interface GigabitEthernet1/9
no passive-interface GigabitEthernet1/10
no passive-interface GigabitEthernet1/11
no passive-interface GigabitEthernet1/12
For the crs-02 router the configuration is almost the same; only the interface IP addresses change according to the server configuration.
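crs-02 is not shown in full here. For illustration only, its interface facing eth0 of dns21 could look roughly like the snippet below; the 8.8.8.193 and 2A02:131:8888:FD01::1 addresses are inferred from the server configuration that follows, the interface number is an assumption, and the remaining OSPF/IPv6 settings are the same as in the crs-01 stanzas above:
interface GigabitEthernet1/9
description ### ENI_R | xxx | eth0-dns21 ###
ip address 8.8.8.193 255.255.255.252
ip ospf network point-to-point
ip ospf hello-interval 1
ip ospf dead-interval 5
ipv6 address 2A02:131:8888:FD01::1/64
ipv6 ospf 16 area 8
end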
Configuration needed on server dns21:
cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 8.8.8.194
netmask 255.255.255.252
iface eth0 inet6 static
address 2a02:131:8888:fd01::194
netmask 64
gateway 2a02:131:8888:fd01::1
auto eth1
iface eth1 inet static
address 8.8.8.130
netmask 255.255.255.252
iface eth1 inet6 static
address 2a02:131:8888:fe01::130
netmask 64
gateway 2a02:131:8888:fe01::1
auto lo:7
iface lo:7 inet static
address 8.8.8.7
netmask 255.255.255.255
up ip -6 addr add 2a02:131:1:8888::7/128 dev lo:7
auto lo:77
iface lo:77 inet static
address 8.8.8.77
netmask 255.255.255.255
up ip -6 addr add 2a02:131:1:8888::77/128 dev lo:77
auto lo:8
iface lo:8 inet static
address 8.8.8.8
netmask 255.255.255.255
up ip -6 addr add 2a02:131:1:8888::8/128 dev lo:8
auto lo:88
iface lo:88 inet static
address 8.8.8.88
netmask 255.255.255.255
up ip -6 addr add 2a02:131:1:8888::88/128 dev lo:88
auto lo:100
iface lo:100 inet static
address 8.8.8.21
netmask 255.255.255.255
up ip -6 addr add 2a02:131:1:8888::21/128 dev lo:100
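After ifup of all the interfaces it is worth checking that every anycast /32 and /128 really ended up on the loopback before OSPF starts announcing them (plain iproute2 commands):
ip -4 addr show dev lo
ip -6 addr show dev lo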
dns-cache21:~# iptables-save
# Generated by iptables-save v1.4.2 on Fri Jun 25 11:10:33 2010
*filter
:INPUT ACCEPT [685482823:68925443107]
:FORWARD ACCEPT [22:1681]
:OUTPUT ACCEPT [734294686:70957724473]
COMMIT
# Completed on Fri Jun 25 11:10:33 2010
# Generated by iptables-save v1.4.2 on Fri Jun 25 11:10:33 2010
*nat
:PREROUTING ACCEPT [24359827:1724945664]
:POSTROUTING ACCEPT [294052022:18879912475]
:OUTPUT ACCEPT [294052012:18879911514]
-A PREROUTING -d 8.8.8.8/32 -i eth1 -p tcp -m tcp --dport 53 -j DNAT --to-destination 8.8.8.130:53
-A PREROUTING -d 8.8.8.8/32 -i eth1 -p udp -m udp --dport 53 -j DNAT --to-destination 8.8.8.130:53
-A PREROUTING -d 8.8.8.8/32 -i eth0 -p tcp -m tcp --dport 53 -j DNAT --to-destination 8.8.8.194:53
-A PREROUTING -d 8.8.8.8/32 -i eth0 -p udp -m udp --dport 53 -j DNAT --to-destination 8.8.8.194:53
-A PREROUTING -d 8.8.8.88/32 -i eth1 -p tcp -m tcp --dport 53 -j DNAT --to-destination 8.8.8.130:53
-A PREROUTING -d 8.8.8.88/32 -i eth1 -p udp -m udp --dport 53 -j DNAT --to-destination 8.8.8.130:53
-A PREROUTING -d 8.8.8.88/32 -i eth0 -p tcp -m tcp --dport 53 -j DNAT --to-destination 8.8.8.194:53
-A PREROUTING -d 8.8.8.88/32 -i eth0 -p udp -m udp --dport 53 -j DNAT --to-destination 8.8.8.194:53
COMMIT
The DNAT is needed to translate an anycast request from the client to the right recursor, based on the incoming interface and the destination IP, because djbdns cannot listen on one IP address and send its recursive queries from another IP. If you use BIND as the recursor you can do this without the iptables trick and with only one DNS server process instead of two.
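Once the OSPF part further below is in place, the DNAT path can be verified from a client behind either router by querying the anycast cache addresses directly; whichever server is topologically closest should answer (any resolvable name works, www.debian.org is just an example):
dig @8.8.8.8 www.debian.org +short
dig @8.8.8.88 www.debian.org +short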
Two IPv4 dnscache recursors are running on each server.
dns-cache21:/etc/sv# cat dnscache/env/IP dnscache/env/IPSEND dnscache/env/ROOT dnscache/env/CACHESIZE dnscache2/env/IP dnscache2/env/IPSEND dnscache2/env/ROOT dnscache2/env/CACHESIZE
8.8.8.194
8.8.8.194
/etc/sv/dnscache/root
1572864000
8.8.8.130
8.8.8.130
/etc/sv/dnscache2/root
1572864000
cat tinydns/env/IP tinydns/env/ROOT tinydns2/env/IP tinydns2/env/ROOT tinydns-ipv6/env/IP tinydns-ipv6/env/ROOT tinydns2-ipv6/env/IP tinydns2-ipv6/env/ROOT
::ffff:8.8.8.7
/etc/sv/tinydns/root
::ffff:8.8.8.77
/etc/sv/tinydns/root
2a02:131:1:8888::7
/etc/sv/tinydns/root
2a02:131:1:8888::77
/etc/sv/tinydns/root
As you can see, we have four authoritative DNS servers, 2× IPv4 and 2× IPv6, sharing one ROOT environment for easier management. No iptables rules are needed here, because an authoritative server listens on only one IP address, the anycast loopback address itself.
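Because all four instances share /etc/sv/tinydns/root as ROOT, the zone data only has to be maintained in one place. A minimal sketch of publishing a zone, assuming the standard tinydns data directory with its add-* helpers and Makefile (example.com and 192.0.2.10 are placeholders):
cd /etc/sv/tinydns/root
./add-ns example.com 8.8.8.7
./add-ns example.com 8.8.8.77
./add-host www.example.com 192.0.2.10
make    # rebuilds data.cdb, which all four instances read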
The quagga configuration on the server looks like this:
dns-cache21:~# cat /etc/quagga/ospfd.conf
hostname ospfd
password secretpassword
log file /var/log/quagga/ospfd.log
service advanced-vty
interface eth0
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 Dn5
ip ospf network point-to-point
ip ospf hello-interval 1
ip ospf dead-interval 5
!
interface eth1
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 Dn5
ip ospf network point-to-point
ip ospf hello-interval 1
ip ospf dead-interval 5
!
router ospf
router-id 8.8.8.194
network 8.8.8.7/32 area 8
network 8.8.8.77/32 area 8
network 8.8.8.8/32 area 8
network 8.8.8.88/32 area 8
network 8.8.8.21/32 area 8
network 8.8.8.194/30 area 8
network 8.8.8.130/30 area 8
area 8 stub
area 8 authentication message-digest
!
log stdout
dns-cache21:~# cat /etc/quagga/ospf6d.conf
hostname ospf6d@plant
password secretpass
log stdout
log file /var/log/quagga/ospf6d.log
service advanced-vty
!
debug ospf6 neighbor state
!
interface eth1
ipv6 ospf6 hello-interval 1
ipv6 ospf6 dead-interval 5
!
interface eth0
ipv6 ospf6 hello-interval 1
ipv6 ospf6 dead-interval 5
!
interface lo
!
router ospf6
router-id 8.8.8.21
interface eth1 area 0.0.0.8
interface eth0 area 0.0.0.8
interface lo area 0.0.0.8
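When quagga is up, you can check on the server that the OSPF and OSPFv3 adjacencies to both routers are established, for example via vtysh; on crs-01/crs-02 the anycast /32s should then show up with one next-hop per server:
vtysh -c 'show ip ospf neighbor'
vtysh -c 'show ipv6 ospf6 neighbor'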
After that you should have a working, fully redundant, multihomed anycast DNS cloud. Next you should gather some statistics about how well the cloud is working and develop an external, decentralized monitoring tool that tests your services so you can act proactively. dnscache and tinydns run under the sv supervisor, which takes care of restarting a service that goes down, but it cannot catch configuration mistakes, so you have to monitor for those yourself.
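As a starting point for such monitoring, a small external check script could look like the sketch below; the tested hostnames, the alert address and the use of mail for alerting are all assumptions to adapt to your environment:
#!/bin/sh
# poll every anycast service address from an external vantage point
CACHES="8.8.8.8 8.8.8.88"
AUTHS="8.8.8.7 8.8.8.77"
ALERT="noc@example.com"

for ip in $CACHES; do
    # the cache must resolve an external name within 2 seconds
    [ -n "$(dig @"$ip" www.debian.org +short +time=2 +tries=1)" ] \
        || echo "cache $ip not answering" | mail -s "DNS anycast alert" "$ALERT"
done

for ip in $AUTHS; do
    # the authoritative server must return the SOA of a hosted zone
    [ -n "$(dig @"$ip" example.com SOA +norecurse +short +time=2 +tries=1)" ] \
        || echo "authoritative $ip gave no answer" | mail -s "DNS anycast alert" "$ALERT"
done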