My Linux Home Away from Home
Just a site of stuff that I know or claim to. ;-)

Network Tools April 28, 2017

There are several tools within Linux to work with network settings and to help find information about the network that you are on. One thing that you will see is that I have hidden the MAC addresses of my devices in this tutorial. The reason is that the MAC address is considered to be the physical address of your network interface. It was brought up that it is similar to your home address.


Disclaimer: These tools should not be used for malicious activity. I do not condone, and am not responsible for, any malicious act committed with any command shown.


  • ifconfig -a – In the example below, the ether field shows the MAC address assigned to your network interface, which is unique to each card. The inet field is the IPv4 address given to your network interface. The inet6 field is the IPv6 address; IPv6 is not yet used by a lot of internet service providers.
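Since the real values are redacted here, below is a minimal sketch of pulling the ether (MAC) and inet (IPv4) fields out of ifconfig-style output with awk. The sample text and its values are made up for illustration.

```shell
# Fabricated ifconfig -a style output (real values are redacted in the post)
sample='wlp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.50  netmask 255.255.255.0  broadcast 192.168.1.255
        ether aa:bb:cc:dd:ee:ff  txqueuelen 1000  (Ethernet)'

# the "ether" line carries the MAC; the "inet " line carries the IPv4 address
mac=$(echo "$sample" | awk '/ether/ {print $2}')
ipv4=$(echo "$sample" | awk '/inet / {print $2}')
echo "MAC: $mac  IPv4: $ipv4"
```

On a live system you would pipe the real `ifconfig -a` output through the same awk filters.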





  • iwconfig – The iwconfig command gives information about the wifi network that you are connected to. The Access Point field that I marked through is the MAC address of that access point.



  • sudo ifconfig wlp2s0 promisc – To place a wireless interface in promiscuous mode for monitoring your local wifi network, use the ifconfig command shown with the wireless interface. Keep in mind that you need to do this with sudo as you are making changes to the network interface.


  • sudo ifconfig wlp2s0 -promisc – This command will take you out of promiscuous mode and back to normal wifi operations.
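Whether the flag actually took effect can be confirmed by looking for PROMISC in the interface flags. A minimal sketch, using a fabricated flags line since the real output is redacted:

```shell
# Fabricated interface flags line, as printed by ifconfig or ip link
flags='wlp2s0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500'

# check whether the PROMISC flag is present
case "$flags" in
  *PROMISC*) echo "promiscuous mode ON" ;;
  *)         echo "promiscuous mode OFF" ;;
esac
```

On a real system, `ip link show wlp2s0 | grep PROMISC` does the same check against live output.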


Before the change to promiscuous mode:





After the change to promiscuous mode:








  • route command – The route command in Linux shows the kernel routing table. Under Flags, U means the route is up while G means it goes through a gateway, so UG marks a gateway route that is up.



  • route -n – The route command with the -n switch shows IP addresses in the routing table instead of resolving them to host names.



  • route add -net default gw gatewayname dev wlp2s0 – Adds a default route through the named gateway on the wlp2s0 interface.


  • route -Cn – Shows the cached routing table used for faster routing of network traffic. There may not be any cache entries, so don’t be concerned if you don’t see anything here.
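A quick way to pull the default gateway out of route -n style output is to filter on the UG flag. A sketch over a fabricated table, since the real addresses are redacted:

```shell
# Fabricated route -n output
table='Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    600    0        0 wlp2s0
192.168.1.0     0.0.0.0         255.255.255.0   U     600    0        0 wlp2s0'

# column 4 holds the flags; UG marks the up gateway route
gw=$(echo "$table" | awk '$4 == "UG" {print $2}')
echo "default gateway: $gw"
```

Against a live system, replace the heredoc with `route -n` itself: `route -n | awk '$4 == "UG" {print $2}'`.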




One thing that becomes an issue is when someone tries to brute force your machine or network. Most companies have ways to deter this, but what if you are a home user and don’t have the fancy network firewalls and IDS systems? This will help in taking care of the problem.
These notes are something that I used from time to time while working in the Linux hosting industry, and they work well. If there is a problem IP address, just null route it using the route command. Let’s say that you have identified the IP address causing the problem; just type the following command at your command line.
  • route add gw lo
You can verify it with the following command:
  • netstat -nr OR route -n
You can also reject the target:
  • route add -host IP-ADDRESS reject
  • route add -host reject
To confirm the null-routing status, use the ip command as follows:
  • ip route get
Output: RTNETLINK answers: Network is unreachable
Drop an entire subnet:
  • route add -net gw lo
You can also use the ip command to null route a network or IP; enter:
  • ip route add blackhole
  • route -n
If you would like to remove a null route or a blocked IP address, just enter the following:
  • route delete
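The whole null-route cycle can be sketched end to end. The address 192.0.2.1 below is a made-up example from the RFC 5737 documentation range, standing in for the redacted address in the post, and the route changes are guarded since they need root:

```shell
# Null-route a problem address, confirm it, then remove it.
BAD_IP=192.0.2.1   # example-only address (RFC 5737 documentation range)

if [ "$(id -u)" -eq 0 ] && command -v route >/dev/null 2>&1; then
    route add -host "$BAD_IP" reject      # drop all traffic to/from it
    route -n | grep "$BAD_IP"             # confirm the entry is present
    route delete -host "$BAD_IP" reject   # remove it when done
else
    echo "run as root to null-route $BAD_IP"
fi
```

The same thing with the ip command would be `ip route add blackhole 192.0.2.1` and `ip route del blackhole 192.0.2.1`.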






No Comments on Network Tools
Categories: Uncategorized

DNS Explained – Part 2 (Tools) April 19, 2017

In Linux, there are some tools we use to check what DNS settings a domain name is using. Most Linux servers, including Red Hat / CentOS / Debian, use built-in DNS services such as named. The named service is the built-in DNS service that control panels such as Plesk and cPanel use to host their DNS settings locally.

Commands Used for DNS Queries:

  • nslookup command – the Name Server Lookup tool, for finding the name servers where the zone file is located for the domain you are looking up.




  • dig command – Just using dig with a domain name brings back the IP Address of where the domain lives.





  • whois command – Looks for information about the domain stored at ICANN.





  • host command – The host command is used to do DNS lookups and will convert a domain name to an IP address.
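The four tools above can be compared side by side. A sketch using the reserved example.com domain as a stand-in for a real one, guarded so it degrades gracefully offline or where a tool is missing:

```shell
# Run the same domain through each lookup tool covered above.
DOMAIN=example.com   # placeholder domain

for tool in nslookup dig host whois; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "== $tool $DOMAIN =="
        "$tool" "$DOMAIN" 2>/dev/null | head -5 || echo "(query failed, offline?)"
    else
        echo "$tool not installed"
    fi
done
```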









Files used in DNS related queries:


  • /etc/resolv.conf – holds the name servers used by the server





  • /etc/hosts – holds static host-to-address mappings; contains domain names and IP addresses








Search for a domain’s mail exchanger (MX) record:
  • nslookup -type=mx





  • dig mx 





Search for a domain’s A record:
  • nslookup -type=a





  • dig a





Search for a domain’s name server (NS) record:



  • nslookup -type=ns





  • dig ns





Search for a domain’s CNAME record:



  • nslookup -type=cname






  • dig cname





Search for a domain’s SPF record:



  • nslookup -type=spf





  • dig spf





List all records for a domain:



  • nslookup -type=any





  • dig any





dig @
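All of the record-type queries above can be run in one pass. A sketch with example.com as a placeholder domain, guarded in case dig is unavailable or the machine is offline:

```shell
# Query each common record type for one domain.
DOMAIN=example.com   # placeholder domain

for type in A AAAA MX NS CNAME TXT ANY; do
    if command -v dig >/dev/null 2>&1; then
        echo "-- $type --"
        dig +short "$DOMAIN" "$type" 2>/dev/null || echo "(no answer)"
    else
        echo "dig not installed; use nslookup -type=$type $DOMAIN"
    fi
done
```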









When migrating zones from GoDaddy, make sure that everything comes across except for the GoDaddy-specific entries. Double or even triple check the information to make sure that everything needed has been added to the /var/named/ file.
– Verify that all newly added domain zone files have the named group assigned.
chgrp named /var/named/
– Verify that the named service configuration file does not have errors.
named-checkconf /etc/named.conf
Also check the domain zone files to make sure that there are no errors.
[root@dns01 named]# named-checkzone
zone loaded serial 1389974311
[root@dns01 named]# named-checkzone
zone loaded serial 1389974311
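Checking each migrated zone by hand gets tedious, so the check can be looped. The /var/named path is from the post; the .db suffix is an assumption about file naming, so adjust it to match your zone files:

```shell
# Validate every zone file in /var/named after a migration.
for zone in /var/named/*.db; do
    [ -f "$zone" ] || { echo "no zone files matched $zone"; continue; }
    name=$(basename "$zone" .db)          # assumes files are named <zone>.db
    named-checkzone "$name" "$zone"       # prints "OK" / serial on success
done
```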
– Reload the named service configuration.
[root@dns01 named]# rndc reload
server reload successful
– Restart the named service.

[root@dns01 named]# service named restart
Stopping named: .                                          [  OK  ]
Starting named:                                            [  OK  ]
– Verify the named service status.
[root@dns01 named]# service named status
version: 9.8.2rc1-RedHat-9.8.2-0.23.rc1.el6_5.1 (Not available)
CPUs found: 2
worker threads: 2
number of zones: 48
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is OFF
recursive clients: 0/0/1000
tcp clients: 0/100
server is up and running
named (pid  7264) is running…

[root@dns01 ~]# cat /var/named/
$ttl 300  IN      SOA (
                        38400 )  IN      NS  IN      NS

@                               MX      10
@                               TXT     "v=spf1 a mx ~all"
as                              A
sbam                            A
tc                              A
ald                             A
osi                             A
mx                              A
pd                              A
isi                             A
nald                            A
ldsaving                        A
quasar                          A
sat                             A
conectado                       A
nsb                             A
mlld                            A
lds                             A
ctl                             A
peak                            A
cbs                             A
lld                             A
nlds                            A
dld                             A
dp                              A
bnld                            A
bsa                             A
lda                             A
lcr                             A
ceot                            A
ftp                             CNAME
www                             CNAME

[root@dns01 ~]# cat /var/named/
$ttl 300      IN      SOA (
                        38400 )      IN      NS      IN      NS

boss                          A
legent                       A
peak                          A
quasar                      A
telecircuit                A
ftp                             CNAME
www                         CNAME

A few web sites for troubleshooting
No Comments on DNS Explained – Part 2 (Tools)
Categories: DNS Information

Manjaro Mate or Ubuntu 17.04 Mate

Hey guys,

As there have been some issues showing up in the Manjaro / Arch realm, it may be time to switch to a distribution that may be somewhat more stable. I am still checking through some things, and I totally understand that Arch is bleeding edge, but sometimes, depending on what we use the OS for, we may need to step back and take another path. I really do like Arch, as I have been able to find most if not all of the packages that I want to use in either the Arch community or AUR repositories. But a few issues have started cropping up, as follows.

  • Dependency issues with packages. An example has to do with winff and ffmpeg: I have started seeing dependency issues showing up during install. Below is an install that I was trying to do in OBRevenge for the WinFF package, which needs ffmpeg to run. You can easily see the issue that I highlighted.

  • In order to fix the above issue, I had to manually install the ffmpeg-full-git package using yaourt. If yaourt is not installed, do the following.

  • Once yaourt is installed, go ahead and install ffmpeg-full-git using the following

  • Downstream driver issues. There was an issue about a week or so ago which broke a lot of people’s desktops containing Nvidia video cards. An update was introduced without warning, and several machines refused to boot into a GUI; screens went black. This is not good at all.
  • Don’t get me wrong, I really do like Manjaro and actually Arch in general. I find that it runs much better on my laptop than Ubuntu, but in order to stay with it, I need to figure out how to get past the dependency issues that all of a sudden cropped up. It is possible that they have been there all along and I am just now noticing them, but who knows. This is something that we need to live with or figure out while working in Arch.




As you can see below, I have a package called pia-nm which appears to be broken via the AUR repository.

It looks like I did find a potential fix or workaround for the package dependency issue that was cropping up in Arch. The following steps make the install easier if there is a dependency issue. An example of when I had to use these steps was installing the PIA VPN. I have not tried this with ffmpeg yet but need to try it out.

  • packer -G packagename
  • cd packagename
  • makepkg -g >> PKGBUILD
  • makepkg
  • sudo pacman -U packagename.pkg.tar.xz

[kf4bzt@tim-laptop ~]$ packer -G pia-nm

[kf4bzt@tim-laptop ~]$ cd pia-nm

[kf4bzt@tim-laptop pia-nm]$ makepkg -g >> PKGBUILD
==> Retrieving sources…
-> Downloading ca.rsa.4096.crt…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2719 100 2719 0 0 10884 0 –:–:– –:–:– –:–:– 10876
-> Downloading servers…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9431 100 9431 0 0 23204 0 –:–:– –:–:– –:–:– 23229
-> Found process_servers
==> Generating checksums for source files…

[kf4bzt@tim-laptop pia-nm]$ makepkg
==> Making package: pia-nm 24-1 (Tue Apr 18 17:08:29 CDT 2017)
==> Checking runtime dependencies…
==> Checking buildtime dependencies…
==> Retrieving sources…
-> Found ca.rsa.4096.crt
-> Found servers
-> Found process_servers
==> Validating source files with sha512sums…
ca.rsa.4096.crt … Passed
servers … Passed
process_servers … Passed
==> Extracting sources…
==> Starting prepare()…
PIA username (pNNNNNNN): Enter username here
==> Entering fakeroot environment…
==> Starting package()…
==> Tidying install…
-> Removing libtool files…
-> Purging unwanted files…
-> Removing static library files…
-> Stripping unneeded symbols from binaries and libraries…
-> Compressing man and info pages…
==> Checking for packaging issue…
==> Creating package “pia-nm”…
-> Generating .PKGINFO file…
-> Generating .BUILDINFO file…
-> Generating .MTREE file…
-> Compressing package…
==> Leaving fakeroot environment.
==> Finished making: pia-nm 24-1 (Tue Apr 18 17:09:21 CDT 2017)

[kf4bzt@tim-laptop pia-nm]$ sudo pacman -U pia-nm-24-1-x86_64.pkg.tar.xz
loading packages…
resolving dependencies…
looking for conflicting packages…

Packages (1) pia-nm-24-1

Total Installed Size: 0.04 MiB

:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring [######################] 100%
(1/1) checking package integrity [######################] 100%
(1/1) loading package files [######################] 100%
(1/1) checking for file conflicts [######################] 100%
(1/1) checking available disk space [######################] 100%
:: Processing package changes…
(1/1) installing pia-nm [######################] 100%




The issue with Ubuntu is that not all packages are available, and you either have to find PPAs or download directly from the developer’s site. This can be a pain in the rear when you need something right then. Luckily, I haven’t run into the issue of needing something yesterday.


As several apps are not available in the repository, below are links to some that I use.

Wavebox (Replacement for wmail) –

kaption (Was able to install in Manjaro and OBRevenge, but requires certain KDE files in Ubuntu to be able to install) –

slack –

zoom –

angryip –

etcher –



No Comments on Manjaro Mate or Ubuntu 17.04 Mate
Categories: Uncategorized

Free Certificates Through

One thing that I found cool while in training is how SSL certificates can now be free with a service called Let’s Encrypt. Paid certs are still around $75 a year, which is not bad at all, but for those of us who don’t have the funds to spend or don’t host secure content, the free SSL is a great way to go. The certificates need to be renewed every 90 days, but it is still the way to go when saving customers money on their web hosting packages. Some customers would rather use paid SSL services when they have some major secure connection; for everyone else it may not be worth it. It is up to the customer.


The links below take you to the content for an awesome project





Below is from the certbot documentation on installing this upon different platforms:


Operating System Packages

Arch Linux

sudo pacman -S certbot


Debian

If you run Debian Stretch or Debian Sid, you can install the certbot packages.

sudo apt-get update
sudo apt-get install certbot python-certbot-apache

If you don’t want to use the Apache plugin, you can omit the python-certbot-apache package.

Packages exist for Debian Jessie via backports. First you’ll have to follow the instructions at to enable the Jessie backports repo, if you have not already done so. Then run:

sudo apt-get install certbot python-certbot-apache -t jessie-backports


Fedora

sudo dnf install certbot python2-certbot-apache


FreeBSD

  • Port: cd /usr/ports/security/py-certbot && make install clean
  • Package: pkg install py27-certbot


Gentoo

The official Certbot client is available in Gentoo Portage. If you want to use the Apache plugin, it has to be installed separately:

emerge -av app-crypt/certbot
emerge -av app-crypt/certbot-apache

When using the Apache plugin, you will run into a “cannot find a cert or key directive” error if you’re sporting the default Gentoo httpd.conf. You can fix this by commenting out two lines in /etc/apache2/httpd.conf. Change this:

<IfDefine SSL>
LoadModule ssl_module modules/

to this:

#<IfDefine SSL>
LoadModule ssl_module modules/

For the time being, this is the only way for the Apache plugin to recognise the appropriate directives when installing the certificate. Note: this change is not required for the other plugins.


NetBSD

  • Build from source: cd /usr/pkgsrc/security/py-certbot && make install clean
  • Install pre-compiled package: pkg_add py27-certbot


OpenBSD

  • Port: cd /usr/ports/security/letsencrypt/client && make install clean
  • Package: pkg_add letsencrypt

Other Operating Systems

OS packaging is an ongoing effort. If you’d like to package Certbot for your distribution of choice please have a look at the Packaging Guide.





The following example is for a Debian 8 server that I have. Make sure that you have port 443 open and accessible.


root@timknowsstuff-vm:~# sudo apt-get install python-certbot-apache -t jessie-backports

root@timknowsstuff-vm:~# a2enmod ssl 
Considering dependency setenvif for ssl: 
Module setenvif already enabled 
Considering dependency mime for ssl: 
Module mime already enabled 
Considering dependency socache_shmcb for ssl: 
Enabling module socache_shmcb. 
Enabling module ssl. 
See /usr/share/doc/apache2/README.Debian.gz on how to configure SSL and create self-signed certificates. 
To activate the new configuration, you need to run: service apache2 restart
root@timknowsstuff-vm:~# a2ensite default-ssl
Enabling site default-ssl.
To activate the new configuration, you need to run:
  service apache2 reload
root@timknowsstuff-vm:~# systemctl restart apache2
root@timknowsstuff-vm:~# netstat -paunt | grep apache2
tcp        0      0   *               LISTEN      31195/apache2   
tcp        0      0    *               LISTEN      31195/apache2   

root@timknowsstuff-vm:~# certbot --apache


Below shows the options within the certbot application:

root@timknowsstuff-vm:~# certbot ?
  certbot [SUBCOMMAND] [options] [-d domain] [-d domain] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates.  By default,
it will attempt to use a webserver both for obtaining and installing the
cert. Major SUBCOMMANDS are:

  (default) run        Obtain & install a cert in your current webserver
  certonly             Obtain cert, but do not install it (aka "auth")
  install              Install a previously obtained cert in a server
  renew                Renew previously obtained certs that are near expiry
  revoke               Revoke a previously obtained certificate
  register             Perform tasks related to registering with the CA
  rollback             Rollback server configuration changes made during install
  config_changes       Show changes made to server config during installation
  plugins              Display information about installed plugins
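Since the certificates expire every 90 days, the renew subcommand listed above is the one you will run most. A hedged sketch: --dry-run talks to the staging CA only, so nothing real is issued, and the cron cadence shown is an assumption about your environment:

```shell
# Test renewal safely, then (commented) automate it.
CMD="certbot renew --dry-run"

if command -v certbot >/dev/null 2>&1; then
    $CMD
else
    echo "certbot not installed; would run: $CMD"
fi

# example cron entry to renew quietly twice a day:
# 0 3,15 * * * certbot renew --quiet
```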
No Comments on Free Certificates Through
Categories: Uncategorized

DNS Explained – Part One April 12, 2017

During a training session yesterday, we had a presentation about DNS that made perfect sense. Here are some points that came out of the training which I think everyone can use.


-What does DNS stand for? Depending on who you Google or ask, it is usually Domain Name System (sometimes Domain Name Service)


-What does DNS do? DNS connects the domain name to an IP Address


-DNS is like the phone book of the internet. When a query is made on a domain name, the search is trying to find the IP Address associated with the domain name. This is similar to your cell phone contacts list: you see a list of contacts, each of which points to a phone number to make contact.


-ICANN oversees the master DNS system – they govern how DNS works


-The reason for needing access to DNS when hosting a web site or application is that there is a possibility that your IP Address may change, and you need to make sure that there is no downtime, or the least amount of downtime possible.


-What is a URL? A URL has a protocol such as http, https, or ftp. These tell what type of communication you are trying to accomplish: http – unsecure web traffic, https – secure web traffic, ftp – file transfer.


-What is a Subdomain? A subdomain breaks the parent domain name into smaller parts. If you look in a DNS control panel, you will see designations such as www, mail, store, docs, etc. These are considered subdomains, as they point to other sections or pages of the parent domain.


-What is a Top Level Domain (TLD)? The top level domain is basically the last part of the domain name: for example, .edu, .com, .net, and .org. These represent what type of site you have created.


- = URL
- = subdomain, .com, .net, .org = tld (Top Level Domain)


-What are DNS resolvers? DNS resolvers do the phone book lookup which takes the domain name and locates the IP Address that is assigned to that domain name.


-What are name servers? Name servers answer the queries used to locate the IP Address of the website. Name servers use zone files, which include the IP Address and where it needs to go.


-What are some of the DNS Record types used?


An A-record (address record) maps a hostname to an IPv4 Address.



An AAAA-record (address record) maps a hostname to an IPv6 Address.



A CNAME (canonical name) record maps a host name to another hostname or FQDN.

-**A CNAME is NOT a redirect. It is an alias**

-**Do Not CNAME a parent domain. You will break the zone file.**



An MX record is the mail exchanger record, which maps the domain to a particular mail server address with a priority. The lower the priority number (10, 20, 30, etc.), the higher the priority that the exchanger has.



A TXT (text) record is used to hold some text information. You can put virtually any free text you want within a TXT record. A TXT record has a hostname so that you can assign the free text to a particular hostname/zone. The most common use for TXT records is to store SPF (Sender Policy Framework) records, which help prevent emails being faked to appear to have been sent from you.



An NS (name server) record allows you to delegate a subdomain of your domain to another name server.



An SPF record is a Sender Policy Framework record. An SPF record is actually a specific type of TXT record.



An SPF record is used to stop people from receiving forged email. By adding an SPF record to your DNS configuration, any mail server receiving email that is allegedly from you will check that the email has come from a trusted source. The trusted sources are provided by the SPF record that you set up.




Use dig with a DNS server IP. In the example I used Google to do a search.
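A sketch of that dig-against-a-specific-server syntax. 8.8.8.8 is Google’s public resolver (the post says Google was used); example.com is a placeholder domain:

```shell
# Query a specific DNS server with dig's @ syntax.
SERVER=8.8.8.8        # Google public DNS
DOMAIN=example.com    # placeholder domain

if command -v dig >/dev/null 2>&1; then
    dig "@$SERVER" "$DOMAIN" +short || echo "(query failed, offline?)"
else
    echo "dig not installed"
fi
```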






Below is a quick how to on how DNS moves its information from the browser to the hosting server:


-1. Type the domain name into the browser
-2. The browser does not know the IP of the domain name, so it asks the resolver for information
-3. The resolver talks to a bunch of NAME SERVERS until it finds the one that has a ZONE FILE for the domain name
-4. The resolver reads the ZONE FILE to learn the IP ADDRESS of the domain name
-5. The RESOLVER then tells my computer/browser the IP ADDRESS for the domain name
-6. The web server (Apache) is reached and the content is sent back to the local browser
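That resolution walk can be watched live with dig’s trace mode, which starts at the root servers and follows the delegations down. A sketch with example.com as a placeholder:

```shell
# Trace a lookup from the root servers down to the zone's name servers.
DOMAIN=example.com   # placeholder domain

if command -v dig >/dev/null 2>&1; then
    dig +trace "$DOMAIN" 2>/dev/null | tail -20 || echo "(trace failed, offline?)"
else
    echo "dig not installed"
fi
```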


So basically, it was explained very simply with the following:

When you go to a web site, the domain name needs to be registered. Once registered, there will need to be name server entries added at the registrar showing where the domain lives.


Registrar  –>  Name Servers  –>  Zone File  –>  IP Address





TTL – Time To Live:

The TTL tells resolvers and browsers how long they may keep the web site’s DNS information before going back out for a fresh answer. The TTL can be set from 5 minutes to 24 hours depending on the provider; if you need a change to propagate quickly, set that level to the lowest it can go. By setting it lower, you also put a greater load on the DNS side. The TTL change is done within the zone file.
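The TTL is visible in dig output: the second field of an answer line is the remaining time to live in seconds. A sketch over a fabricated answer line (the address is from the RFC 5737 documentation range):

```shell
# A dig answer line looks like: name  TTL  class  type  data
answer='example.com.    300    IN    A    203.0.113.10'

# pull out the TTL (second field)
ttl=$(echo "$answer" | awk '{print $2}')
echo "this record may be cached for another $ttl seconds"
```

Against a live domain, `dig +noall +answer example.com` prints the same five-column answer format.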

Domain Name Registrar:

The domain name registrar stores and retrieves information about a domain name, such as contact information about the owner and when the domain name will expire. This information is reported to ICANN as well.




Here is a brief description on the DNS Resolution process:

– Each domain name has a name server attached in order for internet browsers to find the correct location of the domain.

– Each domain contains an IP Address which is given at the server side that the web service lives on.

– At the registrar of the domain, the name servers are added as ns, ns1, ns2, etc., while the domain-name-to-IP-address mapping is added as an A record.

– When an application needs to resolve the domain name, it looks at the name servers to be able to resolve the information. For example, in linux, the nslookup command is used to resolve the name and IP address.

– Basically, from the client side, you type into a browser the domain name you want to visit. The browser will check the local or client resolver, which will use cached data. The local cached data may come from a local hosts file or bind services.

– If the client side does not get anything back, the client will query a preferred DNS server, which will include the,, etc. When the DNS server gets a query, it will check its local zone files to see if it can give an answer back. If it cannot find the information needed in the local zone files, it will go to the local cached data to see what it can find. If the DNS servers cannot complete the query, they will try to do a recursive search to fully resolve the domain name.


No Comments on DNS Explained – Part One
Categories: DNS Information

Geeetech Printer and Windows April 3, 2017


I am on my way to being 100% free of Windows for my projects. I did get my 3D printer to work with Manjaro Arch Linux by doing one simple thing that I had not thought of. All that is needed once the software is installed is to make sure that your user name is part of the uucp group. In Ubuntu it is a different group (dialout), but Arch looks like it requires the user to have access to uucp. Once I did that, the OS was able to connect to the printer and heat the extruder and print bed to temp. I am very excited to have this part working now. 😀
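The group change above is one command. A hedged sketch, guarded since it needs root (the group name is uucp on Arch/Manjaro; on Ubuntu the serial group is dialout) — log out and back in afterwards for it to take effect:

```shell
# Add the invoking user to the serial-port group so /dev/ttyUSB* is accessible.
GROUP=uucp   # use dialout on Ubuntu

if [ "$(id -u)" -eq 0 ] && [ -n "$SUDO_USER" ]; then
    usermod -aG "$GROUP" "$SUDO_USER"
else
    echo "run with sudo to add your user to the $GROUP group"
fi
```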





Good Morning Guys,

I feel a little disappointed and puzzled at the same time. The puzzled part comes from not being able to get my Geeetech Prusa i3x printer to work inside of Manjaro Linux. Within dmesg the printer shows up as it is supposed to, but none of the applications appear to see it. I tried repetier-host and slic3r, which normally always see it and work well as long as the baud rate is set to 250000 and the port assignment is set to automatic, but no go. I tried the Prusa version, which is a better release of the application and seemed to be newer, with the same results, and of course, the new EasyPrint from Geeetech will not run in Linux, go figure. So after a lot of aggravation, I switched my laptop to Windows to make this work. 🙁 I am disappointed that I had to do this, but it is very strange that I could not get it to work.

One thing that I did try in order to keep from making this drastic change was trying to use Octoprint on a Raspi and Astroprint on a Raspi. I had great success with Octoprint in the past but after the printer being down for some time during the move and not having time to fire it back up, even Octoprint quit talking. Astroprint is another system such as Octoprint for allowing remote communications to your 3D printer through either a remote web interface or through the local comm ports. Both apps are great and do awesome stuff. I would highly recommend trying them out if you have a spare Raspi around.

After the aggravation of not getting this to work, I decided to add Windows 🙁 to my laptop. I found that the Prusa version of the software for Windows includes Slic3r for both 1.75mm and 3mm extruder machines, plus Pronterface. This package works really well and honestly made a huge difference in a few test PLA and ABS prints. I installed the 1.75mm Slic3r package that they have, and by using it to slice the object and having Pronterface print, the prints came out much better than before. While the idea of Geeetech having their own package for the Prusa i3 printers is nice, the Geeetech EasyPrint app still has issues connecting to my Prusa i3x and has issues making a connection to the internet to update the app and the prints.

Lesson learned here is that each operating system has a place and my desktop will stay Manjaro Linux but in order to use my 3D printer, my laptop needs to be more flexible.

No Comments on Geeetech Printer and Windows
Categories: 3D Printing