DNS Explained – Part 2 (Tools)

In Linux, there are several tools we can use to check which DNS settings a domain name is using. Most Linux servers, including Red Hat / CentOS / Debian, use a built-in DNS service such as named. The named service is the DNS server that control panels such as Plesk and cPanel use to host their DNS settings locally.

Commands Used for DNS Queries:

  • nslookup command – Name Server Lookup tool, for finding the name servers that hold the zone file for the domain you are looking up.




  • dig command – Running dig with just a domain name brings back the IP address of where the domain lives.





  • whois command – Looks up registration information about the domain, as recorded with ICANN.





  • host command – The host command is used to do DNS lookups and will convert a domain name to an IP address (or the reverse).









Files used in DNS-related queries:


  • /etc/resolv.conf – holds the name servers used by the server





  • /etc/hosts – holds local host-related information; contains domain names mapped to IP addresses
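
Neither file is exotic; illustrative contents look like this (the resolver entries below are Google's public DNS, used purely as an example):

```shell
# Print illustrative contents of both files (on a real server, just cat them)
cat <<'EOF'
# /etc/resolv.conf - resolvers this machine queries, in order

# /etc/hosts - static name-to-IP mappings consulted alongside DNS
127.0.0.1   localhost
EOF
```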








Search for a domain's mail exchanger (MX) record:
  • nslookup -type=mx domain.com





  • dig mx google.com 





Search for a domain's A record:
  • nslookup -type=a domain.com





  • dig a domain.com





Search for a domain's name server (NS) record:



  • nslookup -type=ns domain.com





  • dig ns domain.com





Search for a domain's CNAME record:



  • nslookup -type=cname domain.com






  • dig cname domain.com





Search for a domain's SPF record:



  • nslookup -type=spf domain.com





  • dig spf google.com
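
Note that the dedicated SPF record type has since been deprecated (RFC 7208); in practice SPF data is published in TXT records, so dig txt domain.com is the reliable query today. A minimal sketch of picking the SPF policy out of TXT output (the record text below is made up, piped in place of live dig output):

```shell
# Pick the SPF policy out of TXT records; the live version would be:
#   dig +short txt domain.com | grep v=spf1
printf '%s\n' '"v=spf1 include:_spf.example.net ~all"' '"site-verification=abc123"' \
  | grep 'v=spf1'
```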





List all records for a domain:



  • nslookup -type=any domain.com





  • dig google.com any





  • dig @<DNS server IP> domain.com – query a specific DNS server directly









When migrating zones from GoDaddy, make sure that everything comes across except for the GoDaddy-specific entries, e.g. domaincontrol.com. Double or even triple check the information to make sure that everything needed has been added to the /var/named/domain.com.hosts file.
– Verify that all new domains that have been added have their files' group set to named.
chgrp named /var/named/domain.com.conf
– Verify that the named service configuration file does not have errors.
named-checkconf /etc/named.conf
Also check the domain zone files to make sure that there are no errors.
[root@dns01 named]# named-checkzone directdns.com directdns.com.hosts
zone directdns.com/IN: loaded serial 1389974311
[root@dns01 named]# named-checkzone domain1.com domain1.com.hosts
zone domain1.com/IN: loaded serial 1389974311
– Reload the named service configuration.
[root@dns01 named]# rndc reload
server reload successful
– Restart the named service.

[root@dns01 named]# service named restart
Stopping named: .                                          [  OK  ]
Starting named:                                            [  OK  ]
– Verify the named service status.
[root@dns01 named]# service named status
version: 9.8.2rc1-RedHat-9.8.2-0.23.rc1.el6_5.1 (Not available)
CPUs found: 2
worker threads: 2
number of zones: 48
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is OFF
recursive clients: 0/0/1000
tcp clients: 0/100
server is up and running
named (pid  7264) is running…
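
The checks above can also be scripted. A minimal sketch that greps a zone file for the records every zone needs (the sample zone is generated inline with made-up names; named-checkzone remains the authoritative check):

```shell
# Sanity-check a zone file for required record types. On the DNS box you would
# point this at /var/named/domain.com.hosts instead of the temp file.
zone=$(mktemp)
cat > "$zone" <<'EOF'
example.com.  IN  SOA  dns01.example.net. postmaster.example.net. ( 1 )
example.com.  IN  NS   dns01.example.net.
EOF
for rec in SOA NS; do
  if grep -q "[[:space:]]$rec[[:space:]]" "$zone"; then
    echo "$rec present"
  else
    echo "$rec MISSING"
  fi
done
rm -f "$zone"
```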

[root@dns01 ~]# cat /var/named/domain1.com.hosts
$ttl 300
domain1.com.  IN      SOA     dns01.domain2.com. postmaster.domain2.com (
                        38400 )
domain1.com.  IN      NS      dns01.domain2.com.
domain1.com.  IN      NS      dns02.domain2.com.

@                               MX      10      mx.domain1.com.
@                               TXT     "v=spf1 a mx include:subdomain.domain3.com include:authsmtp.com ~all"
as                              A
sbam                            A
tc                              A
ald                             A
osi                             A
mx                              A
pd                              A
isi                             A
nald                            A
ldsaving                        A
quasar                          A
sat                             A
conectado                       A
nsb                             A
mlld                            A
lds                             A
ctl                             A
peak                            A
cbs                             A
lld                             A
nlds                            A
dld                             A
dp                              A
bnld                            A
bsa                             A
lda                             A
lcr                             A
ceot                            A
ftp                             CNAME   domain1.com.
www                             CNAME   domain1.com.

[root@dns01 ~]# cat /var/named/directdns.com.hosts
$ttl 300
directdns.com.      IN      SOA     dns01.domain2.com. postmaster.domain2.com (
                        38400 )
directdns.com.      IN      NS      dns01.domain2.com.
directdns.com.      IN      NS      dns02.domain2.com.

boss                            A
legent                          A
peak                            A
quasar                          A
telecircuit                     A
ftp                             CNAME   directdns.com.
www                             CNAME   directdns.com.

A few web sites for troubleshooting

Manjaro Mate or Ubuntu 17.04 Mate

Hey guys,

As there have been some issues showing up in the Manjaro / Arch realm, it may be time to switch to a distribution that may be somewhat more stable. I am still checking through some things, and I totally understand that Arch is bleeding edge, but sometimes, depending on what we use the OS for, we may need to step back and take another path. I really do like Arch, as I have been able to find most if not all of the packages that I want to use in either the Arch community or AUR repositories. But a few issues have started cropping up, as follows.

  • Dependency issues with packages. An example has to do with winff and ffmpeg. I have started seeing dependency issues showing up during install. Below shows an install that I was trying to do in OBRevenge for the WinFF package, which needs ffmpeg to run. You can easily see the issue that I highlighted.

  • In order to fix the above issue, I had to manually install the ffmpeg-full-git package using yaourt. If yaourt is not installed, do the following.

  • Once yaourt is installed, go ahead and install ffmpeg-full-git using the following

  • Downstream driver issues. There was an issue about a week or so ago which broke a lot of people's desktops with Nvidia video cards. An update was introduced without warning, and several machines refused to boot into a GUI; screens went black. This is not good at all.
  • Don't get me wrong, I really do like Manjaro, and actually Arch in general. I find that it runs much better on my laptop than Ubuntu, but in order to stay with it, I need to figure out how to get past the dependency issues that all of a sudden cropped up. It is possible that they have been there all along and I am just now noticing them, but who knows. This is something that we need to live with or figure out while working in Arch.




As you can see below, I have a package called pia-nm which appears to be broken via the AUR repository.

It looks like I did find a potential fix or workaround for the package dependency issue that was cropping up in Arch. The following steps make the install easier if there is a dependency issue. An example of when I had to use these steps was installing the PIA VPN. I have not tried this with ffmpeg yet, but I need to try it out.

  • packer -G packagename
  • cd packagename
  • makepkg -g >> PKGBUILD
  • makepkg
  • sudo pacman -U packagename.pkg.tar.xz

[kf4bzt@tim-laptop ~]$ packer -G pia-nm

[kf4bzt@tim-laptop ~]$ cd pia-nm

[kf4bzt@tim-laptop pia-nm]$ makepkg -g >> PKGBUILD
==> Retrieving sources…
-> Downloading ca.rsa.4096.crt…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2719 100 2719 0 0 10884 0 --:--:-- --:--:-- --:--:-- 10876
-> Downloading servers…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9431 100 9431 0 0 23204 0 --:--:-- --:--:-- --:--:-- 23229
-> Found process_servers
==> Generating checksums for source files…

[kf4bzt@tim-laptop pia-nm]$ makepkg
==> Making package: pia-nm 24-1 (Tue Apr 18 17:08:29 CDT 2017)
==> Checking runtime dependencies…
==> Checking buildtime dependencies…
==> Retrieving sources…
-> Found ca.rsa.4096.crt
-> Found servers
-> Found process_servers
==> Validating source files with sha512sums…
ca.rsa.4096.crt … Passed
servers … Passed
process_servers … Passed
==> Extracting sources…
==> Starting prepare()…
PIA username (pNNNNNNN): Enter username here
==> Entering fakeroot environment…
==> Starting package()…
==> Tidying install…
-> Removing libtool files…
-> Purging unwanted files…
-> Removing static library files…
-> Stripping unneeded symbols from binaries and libraries…
-> Compressing man and info pages…
==> Checking for packaging issue…
==> Creating package “pia-nm”…
-> Generating .PKGINFO file…
-> Generating .BUILDINFO file…
-> Generating .MTREE file…
-> Compressing package…
==> Leaving fakeroot environment.
==> Finished making: pia-nm 24-1 (Tue Apr 18 17:09:21 CDT 2017)

[kf4bzt@tim-laptop pia-nm]$ sudo pacman -U pia-nm-24-1-x86_64.pkg.tar.xz
loading packages…
resolving dependencies…
looking for conflicting packages…

Packages (1) pia-nm-24-1

Total Installed Size: 0.04 MiB

:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring [######################] 100%
(1/1) checking package integrity [######################] 100%
(1/1) loading package files [######################] 100%
(1/1) checking for file conflicts [######################] 100%
(1/1) checking available disk space [######################] 100%
:: Processing package changes…
(1/1) installing pia-nm [######################] 100%




The issue with Ubuntu is that not all packages are available, and you either have to find PPAs or download directly from the developer's site. This can be a pain in the rear when you need something right then. Luckily, I haven't run into the issue of needing something yesterday.


As several apps are not available in the repository, below are links to some that I use.

Wavebox (Replacement for wmail) – https://wavebox.io/download

kaption (Was able to install in Manjaro and OBRevenge, but requires certain KDE files in Ubuntu to be able to install) – https://www.linux-apps.com/content/show.php/Kaption?content=139302

slack – https://slack.com/downloads/linux

zoom – https://www.zoom.us/download

angryip – http://angryip.org/download/#linux

etcher – https://etcher.io



Free Certificates Through letsencrypt.org

One thing that I found cool while in training is how SSL certificates can now be free through a service called Let's Encrypt. Paid certs are still around $75 a year, which is not bad at all, but for those of us who don't have the funds to spend, or don't have secure content, a free SSL certificate is a great way to go. The certificates need to be renewed every 90 days, but it is still the way to go when saving customers money on their web hosting packages. Some customers would rather use paid SSL services; when they handle major secure traffic that may be worth it, but otherwise it may not be. It is up to the customer.


The links below take you to the content for an awesome project








Below is from the certbot documentation on installing this upon different platforms:


Operating System Packages

Arch Linux

sudo pacman -S certbot


If you run Debian Stretch or Debian Sid, you can install certbot packages.

sudo apt-get update
sudo apt-get install certbot python-certbot-apache

If you don’t want to use the Apache plugin, you can omit the python-certbot-apache package.

Packages exist for Debian Jessie via backports. First you’ll have to follow the instructions at http://backports.debian.org/Instructions/ to enable the Jessie backports repo, if you have not already done so. Then run:

sudo apt-get install certbot python-certbot-apache -t jessie-backports


Fedora

sudo dnf install certbot python2-certbot-apache


FreeBSD

  • Port: cd /usr/ports/security/py-certbot && make install clean
  • Package: pkg install py27-certbot


The official Certbot client is available in Gentoo Portage. If you want to use the Apache plugin, it has to be installed separately:

emerge -av app-crypt/certbot
emerge -av app-crypt/certbot-apache

When using the Apache plugin, you will run into a “cannot find a cert or key directive” error if you’re sporting the default Gentoo httpd.conf. You can fix this by commenting out two lines in /etc/apache2/httpd.conf as follows:


<IfDefine SSL>
LoadModule ssl_module modules/mod_ssl.so
</IfDefine>

becomes:

#<IfDefine SSL>
LoadModule ssl_module modules/mod_ssl.so
#</IfDefine>

For the time being, this is the only way for the Apache plugin to recognise the appropriate directives when installing the certificate. Note: this change is not required for the other plugins.


NetBSD

  • Build from source: cd /usr/pkgsrc/security/py-certbot && make install clean
  • Install pre-compiled package: pkg_add py27-certbot


OpenBSD

  • Port: cd /usr/ports/security/letsencrypt/client && make install clean
  • Package: pkg_add letsencrypt

Other Operating Systems

OS packaging is an ongoing effort. If you’d like to package Certbot for your distribution of choice please have a look at the Packaging Guide.





The following examples are for a Debian 8 server that I have. Make sure that you have port 443 open and accessible.


root@timknowsstuff-vm:~# sudo apt-get install python-certbot-apache -t jessie-backports

root@timknowsstuff-vm:~# a2enmod ssl 
Considering dependency setenvif for ssl: 
Module setenvif already enabled 
Considering dependency mime for ssl: 
Module mime already enabled 
Considering dependency socache_shmcb for ssl: 
Enabling module socache_shmcb. 
Enabling module ssl. 
See /usr/share/doc/apache2/README.Debian.gz on how to configure SSL and create self-signed certificates. 
To activate the new configuration, you need to run: service apache2 restart
root@timknowsstuff-vm:~# a2ensite default-ssl
Enabling site default-ssl.
To activate the new configuration, you need to run:
  service apache2 reload
root@timknowsstuff-vm:~# systemctl restart apache2
root@timknowsstuff-vm:~# netstat -paunt | grep apache2
tcp        0      0   *               LISTEN      31195/apache2   
tcp        0      0    *               LISTEN      31195/apache2   

root@timknowsstuff-vm:~# certbot --apache


Below shows the options within the certbot application:

root@timknowsstuff-vm:~# certbot --help
  certbot [SUBCOMMAND] [options] [-d domain] [-d domain] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates.  By default,
it will attempt to use a webserver both for obtaining and installing the
cert. Major SUBCOMMANDS are:

  (default) run        Obtain & install a cert in your current webserver
  certonly             Obtain cert, but do not install it (aka "auth")
  install              Install a previously obtained cert in a server
  renew                Renew previously obtained certs that are near expiry
  revoke               Revoke a previously obtained certificate
  register             Perform tasks related to registering with the CA
  rollback             Rollback server configuration changes made during install
  config_changes       Show changes made to server config during installation
  plugins              Display information about installed plugins
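
The renew subcommand makes automation easy; an illustrative cron entry (the schedule here is an assumption, adjust to taste) would look like this:

```shell
# The line below is what you would drop into /etc/cron.d/certbot
# (illustrative weekly schedule; renew only replaces certs near expiry)
printf '%s\n' '0 3 * * 1 root certbot renew --quiet'
```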

DNS Explained – Part One

During a training session yesterday, we had a presentation about DNS that made perfect sense. Here are some points that came out of the training which I think everyone can use.


-What does DNS stand for? Depending on who you Google or ask, it usually will be Domain Name System (sometimes Domain Name Service)


-What does DNS do? DNS connects the domain name to an IP Address


-DNS is like the phone book of the internet. When a query is made on a domain name, the search is trying to find the IP address associated with the domain name. This is similar to your cell phone contacts list: you see a list of contacts, each of which points to a phone number used to make contact.


-ICANN oversees the DNS root – they coordinate how DNS works


-The reason for needing access to DNS when hosting a web site or application is that there is a possibility that your IP address may change, and you need to make sure that there is no downtime, or the least amount of downtime possible.


-What is a URL? A URL has a protocol such as http, https, or ftp. The protocol tells what type of communication you are trying to accomplish: http – insecure web traffic, https – secure web traffic, ftp – file transfer.


-What is a Subdomain? A subdomain breaks the parent domain name down into smaller parts. If you look in a DNS control panel, you will see designations such as www, mail, store, docs, etc. These are considered subdomains, as they point to other sections or pages of the parent domain.


-What is a Top Level Domain (TLD)? The top level domain is basically the last part of the domain name, for example .edu, .com, .net, and .org. These loosely represent what type of site you have created.


-http://www.google.com/search = URL
-http://search.google.com = subdomain
-.edu, .com, .net, .org = tld (Top Level Domain)


-What are DNS resolvers? DNS resolvers do the phone book lookup: they take the domain name and locate the IP address assigned to it.


-What are name servers? Name servers answer the queries used to locate the IP address of the website. Name servers use zone files, which map names to IP addresses so traffic knows where it needs to go.


-What are some of the DNS Record types used?


An A-record (address record) maps a hostname to an IP Address.



An AAAA-record (IPv6 address record) maps a hostname to an IPv6 address.



A CNAME (canonical name) record maps a host name to another hostname or FQDN.

-**A CNAME is NOT a redirect. It is an alias**

-**Do Not CNAME a parent domain. You will break the zone file.**
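
To make the alias-vs-redirect point concrete, here is what CNAME use looks like in a zone file (the names and address are illustrative); the record sits on a subdomain, never on the zone apex:

```
; www is an alias for the parent - resolvers chase the CNAME to the A record
www           IN  CNAME  example.com.
; the parent keeps its own A record; never CNAME the zone apex
example.com.  IN  A      93.184.216.34
```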



An MX record is the mail exchanger record, which maps the domain to a particular mail server with a preference value. The lower the preference number (e.g. 10, 20, 30), the higher the priority that the exchanger has.
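
"Lowest number wins" is easy to check from the command line; a sketch that sorts sample MX data by preference (the hostnames are made up; live data would come from dig +short mx domain.com):

```shell
# Lowest preference wins - sort sample MX data numerically, take the first
printf '%s\n' '20 alt1.mx.example.net.' '10 mx.example.net.' '30 alt2.mx.example.net.' \
  | sort -n | head -n 1
```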



A TXT (text) record is used to hold free-form text. You can put virtually any text you want within a TXT record. A TXT record has a hostname so that you can assign the text to a particular hostname/zone. The most common use for TXT records is to store SPF (Sender Policy Framework) data, which helps prevent emails being faked to appear to have been sent from you.



An NS (name server) record allows you to delegate a subdomain of your domain to another name server.



An SPF record is a Sender Policy Framework record. An SPF record is actually a specific type of TXT record.



An SPF record is used to stop people from receiving forged email. By adding an SPF record to your DNS configuration, any mail server receiving email that is allegedly from you will check that the email has come from a trusted source. The trusted sources are listed in the SPF record that you set up.




Use dig with a DNS server IP to query that server directly. In the example I used Google's public resolver to do the search.






Below is a quick rundown of how DNS moves its information from the browser to the hosting server:


-1. Type the domain name into the browser.
-2. The browser does not know the IP of the domain name, so it looks at the RESOLVER for the information.
-3. The resolver talks to a series of NAME SERVERS until it finds the one that has a ZONE FILE for the domain name.
-4. The resolver reads the ZONE FILE to learn the IP ADDRESS of the domain name.
-5. The RESOLVER then tells my computer/browser the IP ADDRESS for the domain name.
-6. The web server (Apache, in this case) is read and the content is sent back to the local browser.
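
The steps above can be sketched as a toy lookup against a local cache file; real resolvers walk the name server hierarchy, but the zone-file read in step 4 boils down to the same match-and-return (all values here are made up):

```shell
# Toy version of steps 2-5: look a name up in a hosts-style cache file
cache=$(mktemp)
printf '%s\n' '93.184.216.34 example.com' '127.0.0.1 localhost' > "$cache"
resolve() { awk -v name="$2" '$2 == name { print $1 }' "$1"; }
resolve "$cache" example.com
rm -f "$cache"
```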


So basically, it was explained very simply with the following:

When you go to a web site, the domain name needs to be registered. Once registered, there will need to be name server entries added at the registrar showing where the domain lives.


Registrar  –>  Name Servers  –>  Zone File  –>  IP Address





TTL – Time To Live:

The TTL tells resolvers (and browsers) how long they may cache the web site's DNS information before going back out for fresh data. The TTL can typically be set from 5 minutes to 24 hours depending on the provider; if you need a change to go out quickly, set it to the lowest it can go. By setting it lower, you will also see a greater load on the DNS side. The TTL change is made within the zone file.
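
In a BIND zone file the default TTL sits at the top; lowering it ahead of a planned change looks like this (the value is illustrative):

```
$TTL 300   ; 5 minutes - fast propagation at the cost of more resolver traffic
```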

Domain Name Registrar:

The domain name registrar is used to store and retrieve information about a domain name such as contact information about the owner and when the domain name will expire. This information is pulled and sent to ICANN as well.




Here is a brief description on the DNS Resolution process:

– Each domain name has a name server attached in order for internet browsers to find the correct location of the domain.

– Each domain contains an IP Address which is given at the server side that the web service lives on.

– At the registrar of the domain, the name servers are added as ns, ns1, ns2 etc while the domain name to IP address is added as an A record.

– When an application needs to resolve the domain name, it asks the name servers to resolve the information. For example, in Linux, the nslookup command is used to resolve the name to an IP address.

– Basically, from the client side, you type the domain name you want to visit into a browser. The browser first checks the local or client resolver, which holds cached data. The local cached data may come from a local hosts file or a caching service such as BIND.

– If the client side does not get anything back, the client will query a preferred DNS server, such as ns.domain.com, ns2.domain.com, etc. When the DNS server gets a query, it will check its local zone files to see if it can give an answer back. If it cannot find the information needed in the local zone files, it will check its local cached data. If the DNS server still cannot complete the query, it will do a recursive search to fully resolve the domain name.


Geeetech Printer and Windows


I am on my way to being 100% free of Windows for my projects. I did get my 3D printer to work with Manjaro (Arch Linux) by doing one simple thing that I had not thought of. All that is needed, once the software is installed, is to make sure that your user name is part of the uucp group. Ubuntu uses a different group (dialout), but Arch looks like it requires the user to have access to uucp. Once I did that, the OS was able to connect to the printer and heat the extruder and print bed to temp. I am very excited to have this part working now. 😀
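
A quick way to check whether your user already has the group (uucp here, per the Arch setup described above; Ubuntu's equivalent is dialout):

```shell
# Check membership in the serial-device group: uucp on Arch, dialout on Ubuntu
grp=uucp
if id -nG | tr ' ' '\n' | grep -qx "$grp"; then
  echo "already in $grp"
else
  echo "not in $grp - add with: sudo usermod -aG $grp \$USER"
fi
```

After adding yourself to the group, log out and back in for the change to take effect.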





Good Morning Guys,

I feel a little disappointed and puzzled at the same time. The puzzled part comes from not being able to get my Geeetech Prusa i3x printer to work inside of Manjaro Linux. Within dmesg the printer shows up as it is supposed to, but none of the applications appear to see it. I tried repetier-host and slic3r, which normally always see it and work well as long as the baud rate is set to 250000 and the port assignment is set to automatic, but no go. I tried the Prusa version, which is a better and seemingly newer release of the application, with the same results, and of course the new EasyPrint from Geeetech will not run in Linux, go figure..lol. So after a lot of aggravation, I switched my laptop to Windows to make this work. 🙁 I am disappointed that I had to do this, but it is very strange that I could not get it to work.

One thing that I did try in order to keep from making this drastic change was trying to use Octoprint on a Raspi and Astroprint on a Raspi. I had great success with Octoprint in the past but after the printer being down for some time during the move and not having time to fire it back up, even Octoprint quit talking. Astroprint is another system such as Octoprint for allowing remote communications to your 3D printer through either a remote web interface or through the local comm ports. Both apps are great and do awesome stuff. I would highly recommend trying them out if you have a spare Raspi around.

After the aggravation of not getting this to work, I decided to add Windows 🙁 to my laptop. I found that the Prusa version of the software for Windows includes Slic3r for both 1.75 and 3mm extruder machines, plus Pronterface. This package works really well and honestly made a huge difference in a few test PLA and ABS prints. I installed the 1.75mm Slic3r package that they have, and by using it to slice the object and having Pronterface print, the prints came out much better than before. While the idea of Geeetech having their own package for the Prusa i3 printers is nice, the Geeetech EasyPrint app still has issues connecting to my Prusa i3x and has issues making a connection to the internet to update the app and the print.

Lesson learned here is that each operating system has a place and my desktop will stay Manjaro Linux but in order to use my 3D printer, my laptop needs to be more flexible.

ffmpeg and codecs

Hey Guys,

I want to talk about the install of the video compression software called ffmpeg and its frontend app WinFF. The ffmpeg application is used for converting video and audio files from one format to another. Usually if you're looking for better quality output or smaller file sizes, this will do the job. The ffmpeg website states, “FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much anything that humans and machines have created. It supports the most obscure ancient formats up to the cutting edge.”

The WinFF application is the graphical frontend for the ffmpeg application, used to change the encoding on a video or audio file that you created to make it smaller or change the overall quality. When I installed Manjaro 17 Mate, I noticed something that was a little off. There were some issues installing the apps through the AUR that needed to be fixed. The biggest issue was that the gpg keys were not being accepted from the packages, so I had to manually add them by doing the following.

  • Create a gpg configuration file in your home folder, located at ~/.gnupg/gpg.conf
  • Add this to the gpg.conf file that you created, without the quotes: "keyring /etc/pacman.d/gnupg/pubring.gpg"
  • Now, when you get the error about the key issue, do the following, changing the example to the key that you see in the error. Try to run the following as your regular user, but if needed issue sudo before the commands to run them as root.
    • pacman-key -r 919464515CCF8BB3
    • pacman-key -f 919464515CCF8BB3
    • pacman-key --lsign-key 919464515CCF8BB3
    • gpg --recv-keys 919464515CCF8BB3
    • gpg --edit-key 919464515CCF8BB3
      • trust
      • Choose full or ultimate
      • Type quit once complete

You may have to do this with other applications too, but now you see how easy it is. I'm not sure why this is showing up now, and in Mate, but the fix will work.



Something else that I noticed was an error during the installation of the ffmpeg-full application and codecs: jni, which appears to be part of a Java package, was causing installation issues.

According to the following URL, the jni issue was removed in the git version of ffmpeg. The backend app and codecs should now install with no problems.


Just install ffmpeg-full-git from the AUR repository as you do with other applications. Depending on the speed of the system, the installation will take some time to complete.



I just found something out which is interesting. There is still a broken codec plugin called libvo-aacenc. It sounds like this one is not worth messing with, as it is not located in the community or AUR repositories. It has been recommended to install libfdk-aac instead, which requires ffmpeg to be recompiled with it. Now, with that being said, I was able to find libfdk-aac in the AUR, and it installed without any issues, but it did rebuild ffmpeg on its own. I'm not sure what was removed and re-added when it comes to codecs, as it removed the existing version of ffmpeg altogether. Just by watching the install process, it is possible that the majority was reinstalled.

In order to make this change, you will need to do the following:

  • Open WinFF
  • Click on Edit
  • Select Presets
  • Choose the MPEG-4 codec
  • Select either MPEG-4 720p or MPEG4-1080p and change the library from libvo_aacenc to libfdk_aac
  • Click on Add/Update to make sure that the changes took
  • Click Save for extra good measure
  • Click on Close


  • To test, Select MPEG-4 under Convert to:
  • Select MPEG-4 720p or MPEG-4 1080p under preset
  • Make sure you have your video selected
  • Click Convert
  • If everything is set correctly, you should see a terminal open and the conversion process should run

There is still an issue even with using libfdk_aac, as the video is not coming out like it's supposed to. I am going to look into this a little deeper to see what is going on. I am also trying Handbrake to see how that works with converting videos to different formats.


From a post that I found:

“As of FFmpeg 3.0 (Feb 2016), libvo-aac has been removed from FFmpeg because the built-in AAC encoder has superior quality and is no longer experimental. It is suggested to use the built-in encoder (-c:a aac) or libfdk-aac (-c:a libfdk_aac) instead.”

To look for the aac codecs, use the command, ffmpeg -codecs.
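
The filtering is just grep on top of that command (the sample lines below are illustrative stand-ins for real ffmpeg -codecs output):

```shell
# Filter for AAC entries the way `ffmpeg -codecs | grep aac` would
printf '%s\n' \
  'aac      AAC (Advanced Audio Coding)' \
  'aac_latm AAC LATM' \
  'flac     FLAC' \
  | grep '^aac'
```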

I took the libvo_aacenc codec from the MPEG-4 preset and replaced it with the stock aac encoder, and the app worked. From what I have been reading on this subject, the libvo_aacenc codec has been removed from the repositories as a poor-quality encoder, and it was recommended to use one of the aac codecs instead. For a list of available codecs, see the links shown at the bottom of the page.






3D Printing tips for printing with the Prusa i3x Printer

I am adding some tips that I have come across or found out on my own while using my 3D printer. 3D printing is not easy and takes time, but it can be rewarding in the end.

  • Print about 4 to 5 loops to get the filament to start and to adhere to the heat bed. Not needed all the time, maybe for testing bed leveling. This may or may not be necessary depending on the type of adhesion that you are using on your heat bed. Personally, with PLA, I use blue painter's tape and the purple glue stick. With ABS, I use ABS juice that I put together with ABS pieces and some acetone mixed in a glass or metal container. I read that you need to allow the ABS juice to dry during the heating process and the part will still stick. I will try that out.
  • Make sure that the bed leveling is spot on. Some people use calipers, some use a ruler to measure the distance between the print nozzle and the heat bed, while others use automated bed leveling. I normally use calipers to measure between the Y axis bed frame and the heat bed to get things level.
  • Don't over-tighten your X and Y axis belt idlers. If you over-tighten the idlers, you will over-tighten the belts, which will cause tension issues with the motors, cause skipping and bad prints, and eventually burn out the motors.
  • Make sure that you have extra parts on hand such as bearings, bushings, a spare belt, screws, nuts, washers, etc.
  • Depending on what filament material you will be using, you need to make sure that you have painter's tape, a glue stick, ABS juice, and anything else you may need to help the parts stick to the heat bed. Further down this page is some information about filament, including a link to some other information.
  • Speaking of the heat bed, you need to make sure that the temperature of the heat bed is correct for the material that you will be using as well. For instance, PLA works just fine between 55 and 65 degrees Celsius on the heat bed and 230 to 235 degrees Celsius on the extruder. ABS needs to be around 80 to 100 degrees Celsius on the heat bed and around 230 to 250 degrees Celsius on the extruder. Below is an example of how one Thingiverse user runs their printer with ABS filament.



This is not mine, but I may have to try it out anyway.  😀

  1. I use an aluminium plate instead of the original glass plate, with the same dimensions and thickness
  2. I heat up the plate to about 240F and spread the complete plate with a thin layer of hot glue. I use these sticks that are made for hot glue guns.
  3. I cover the plate with some piece of cardboard for about 10 minutes (or longer). This gives the glue time to spread evenly
  4. I start printing with plate temperature about 200F.
  5. After the second layer I reduce plate temperature to 70F. The model is glued to the printing plate very firmly, no warping
  6. When print is finished, I heat up the printing plate to 200F again – glue gets soft again, and I can remove the model.



I found the following at https://filaments.ca/pages/temperature-guide, which works well as a guide to what the temps should be. Keep in mind that not all filaments are created equal, so the temps will differ from product to product, but they should be very similar. Check out the URL above as there are other filament types to choose from.


Filament Type   |   Extruder Temp  |  Comments


PLA (Original & Creative Series) 215°C – 235°C
  • PLA can be printed both with and without a heated print bed, but if your desktop 3D printer does have a heated print bed it is recommended to set your print bed temperature to approximately 60°C – 80°C.
  • First layer usually 5°C-10°C higher than subsequent layers.
  • Glow in the dark use 5°C-10°C higher.
  • Sticks well to Blue painter’s tape.
  • Sticks well to extra strong hair spray.
  • Sticks well with “ABS Juice” (scrap ABS filament dissolved in acetone)
ABS (Original & Creative Series) 230°C – 240°C
  • Heated print bed recommended. Set your print bed temperature to approximately 80°C – 100°C. After the first few layers, it’s best to turn down your print bed temperature a bit.
  • Glow in the dark ABS use 250°C
  • Sticks well to Polyimide/Kapton tape, PET tape, Blue tape.
  • Sticks well to extra strong hair spray.
  • Sticks well with “ABS Juice” (scrap ABS filament dissolved in acetone).
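As a quick reference, the guidance above can be condensed into a small shell helper. The values are the approximate starting points from the table above; actual temps vary by product, so treat this as a sketch:

```shell
# Approximate starting temps from the guide above; adjust per product.
filament_temps() {
    case "$1" in
        PLA) echo "extruder 215-235C, bed 60-80C" ;;
        ABS) echo "extruder 230-240C, bed 80-100C" ;;
        *)   echo "no data for: $1" >&2; return 1 ;;
    esac
}

filament_temps PLA
filament_temps ABS
```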




You know, where there’s a will, there’s a way. That holds true in 3D printing. If there is something that you are missing for a project, just make it and print it out. For example, I want to add dual extrusion to my printer. This means that I will be able to print with two different color filaments at the same time, or two different types of filament as well. Now, the problem is that I do have two identical extruders, but I am missing the bracket that holds them together in the mount. Well, there is a guy that I know on Thingiverse who goes by his ham call sign of KB3LNN. He created a dual extruder mount for the same extruders that I have, and he added the bracket because I said that I was missing it. That is awesome. He didn’t have to, but did out of the kindness of his heart.

Now to print this bad boy out using my ABS filament. ABS is heat resistant and will allow this mount to work real well.


There is also another mount which appears to be a good one as well. This set was created by a gentleman called lukie80 on Thingiverse and has multiple capabilities. For this project, we just need the two sections below, the first link being the carriage system and the second one being the dual extruder mount. I will still have to print the bracket from the project above, but it will all come together real well.






I need to come up with some changes for my Prusa i3x that work better on the Z axis. One thing I have noticed is that, since these are the acrylic frame printers, the weight is a little on the heavy side with the extruder in place. One thing I was wondering is whether it is possible to separate the motors, placing them on top of the printer somewhere, and place just the hotends on the extruder mount. There are printable parts to help fix these issues, and if I print them in ABS they will be strong enough to work for quite some time and actually be lighter than the acrylic material used on the printer to begin with. Some of the things that I wanted to work on are the bearings and bushings, to make sure that I have the right diameter ones, and the X and Y axis pulley bearings that keep wearing out. I have tried printed parts for these and they work better than the micro bearings that come with the kits. I have gone through bearings like water going through me. LOL








http://www.thingiverse.com/thing:2202854 – You may want to bookmark or download these







Linux Backups (Are They Needed??) – Part 2 – BackInTime

Now that we have a good snapshot backup application installed, we need to make sure that our personal files such as documents, pictures, videos, etc. are backed up as well. Since TimeShift takes care of the operating system side, an application called backintime will take care of the rest. It has very similar capabilities to TimeShift, such as full and incremental backups, but with TimeShift, from what I can tell, the backups stay on the local machine, whereas with backintime you can tell the app where to place the backups. Backintime also uses the rsync command directly for its backups and restores.




The installation process is identical to the way we install most packages. Below are the steps that can be used to install the backintime backup / restore application.


Now for the BackInTime installation from Pamac (Add / Remove Software Application):

  • Open the Pamac application (Add / Remove Software Application)
  • Type in backintime in the search
  • Click on the AUR tab
  • Select backintime
  • Click Apply
  • A popup will show that there will be dependencies that need to be resolved. Click on the Commit button
  • Enter the sudo password to elevate permission for the installer
  • Just sit back and let the installer finish.
  • Once the install is complete just close the Pamac application
  • Now you have a working version of backintime installed and ready to go.

If you choose to do this from a terminal, here you go:

  • Make sure that yaourt is installed by issuing sudo pacman -S yaourt
  • Yaourt is the command line app for working with the AUR repository
  • Remember that yaourt complains about running as root. Run it from your local account with sudo access.
  • Open a terminal and type yaourt -S backintime (yaourt will call sudo itself when needed)
  • You will need to give your sudo password to elevate for the installation
  • If prompted to edit files, just say no unless you know what you’re doing
  • If prompted to install packages, just say yes
  • Once the installation is complete, you will have a fully operational version of backintime


Keep in mind that you can create and run the backups and restore either from the command line as shown below or from the desktop using the backintime application.




[kf4bzt@tim-pc ~]$ backintime --help
usage: backintime [-h] [--config PATH] [--debug]
[--profile NAME | --profile-id ID] [--quiet] [--version]

Back In Time - a simple backup tool for Linux.

optional arguments:
-h, --help show this help message and exit
--config PATH Read config from PATH.
--debug Increase verbosity.
--profile NAME Select profile by NAME.
--profile-id ID Select profile by ID.
--quiet Be quiet. Suppress messages on stdout.
--version, -v show backintime's version number.
--license show backintime's license.


backup – Take a new snapshot. Ignore if the profile is not
scheduled or if the machine runs on battery.
backup-job – Take a new snapshot in background only if the profile
is scheduled and the machine is not on battery. This
is use by cron jobs.
benchmark-cipher – Show a benchmark of all ciphers for ssh transfer.
check-config – Check the profiles configuration and install crontab
decode – Decode pathes with 'encfsctl decode'
last-snapshot – Show the ID of the last snapshot.
last-snapshot-path – Show the path of the last snapshot.
pw-cache – Control Password Cache for non-interactive cronjobs.
remove – Remove a snapshot.
remove-and-do-not-ask-again – Remove snapshots and don’t ask for confirmation
before. Be careful!
restore – Restore files.
snapshots-list – Show a list of snapshots IDs.
snapshots-list-path – Show the path’s to snapshots.
snapshots-path – Show the path where snapshots are stored.
unmount – Unmount the profile.

For backwards compatibility commands can also be used with trailing '--'. All
listed arguments will work with all commands. Some commands have extra
arguments. Run 'backintime <COMMAND> -h' to see the extra arguments.



Instead of posting all of the possible settings for the configuration file, I have attached a file with them in it.







If moving a configuration from one machine to another, make sure that you change the hostname and that the backup folder has been created. Once that is done, run the following to check the config file.
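A rough sketch of that migration, using example hostnames and a sample config line. The real config lives at ~/.config/backintime/config; the key name and paths below are illustrative, not backintime's exact keys:

```shell
# Hypothetical example: the old hostname appears in the snapshot path,
# so a search-and-replace plus creating the destination folder covers it.
echo 'snapshots.path=/home/kf4bzt/Backups/backintime/old-pc' > config.example

sed -i 's/old-pc/tim-pc/' config.example      # swap in the new hostname
mkdir -p ./Backups/backintime/tim-pc          # backup folder must exist

grep tim-pc config.example
# then run: backintime check-config
```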


[kf4bzt@tim-pc ~]$ backintime check-config

Back In Time
Version: 1.1.14

Back In Time comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; type `backintime --license' for details.
│ Check/prepair snapshot path │
Check/prepair snapshot path: done

│ Check config │
Check config: done

│ Install crontab │
ERROR: Failed to get crontab lines: 1, no crontab for kf4bzt

Install crontab: done

Config /home/kf4bzt/.config/backintime/config profile ‘Main profile’ is fine.




Now that the configuration is set up and ready, go ahead and try the first backup run.


[kf4bzt@tim-pc ~]$ backintime backup

Back In Time
Version: 1.1.14

Back In Time comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; type `backintime --license' for details.

INFO: Lock
WARNING: Inhibit Suspend failed.
INFO: Take a new snapshot. Profile: 1 Main profile
INFO: Call rsync to take the snapshot
INFO: Save config file
INFO: Save permissions
INFO: Create info file
INFO: Remove backups older than: 20170220-000000
INFO: Keep min free disk space: 10240 MiB
INFO: Keep min 2% free inodes
INFO: Unlock




As you can see, the backup was successful and created the folder with the content, if any.


[kf4bzt@tim-pc ~]$ ls -alh ./Backups/backintime/tim-pc/kf4bzt/1/
total 12K
drwxr-xr-x 3 kf4bzt kf4bzt 4.0K Mar 22 15:19 .
drwxr-xr-x 3 kf4bzt kf4bzt 4.0K Mar 22 15:18 ..
dr-xr-xr-x 3 kf4bzt kf4bzt 4.0K Mar 22 15:19 20170322-151819-785
lrwxrwxrwx 1 kf4bzt kf4bzt 19 Mar 22 15:19 last_snapshot -> 20170322-151819-785





As I set up Back In Time on Ubuntu Mate, I took some screenshots to show what this would look like. Back In Time can either be used as a standalone backup solution or as a supplement to the TimeShift backup agent, pulling in user home folders and other things that may not be included in the original snapshot. As shown in the first screenshot, the Back In Time application can be used as a full system backup solution. When you click on Yes, the settings get changed to make the app capable of doing a full system backup.



As we look into the settings, most can be left at default, but there are some that do need to be modified in order for you to have the backups that you need when you need them. The first item to change will be where you want your snapshots to be located. Just create a folder and point Back In Time at it. You will also need to set a schedule for backup snapshots to be taken. I set mine to every day at midnight local.



The Include tab allows you to choose files, folders, or both to be included in the regular snapshot. I chose most of the folders within my home directory, as TimeShift does not include /home in its snapshots.



The Exclude tab allows you to skip certain files or folders, but I have left this at its default settings as I think they will work for now. This is good for excluding data from a full snapshot that you do not need backed up.
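Under the hood, include/exclude choices like these become rsync filters. A scratch-directory sketch of the effect (illustrative only, not backintime's exact command; directory names are made up):

```shell
# Build a tiny fake home dir with one folder to keep and one to skip.
mkdir -p home/docs home/.cache
echo "keep" > home/docs/a.txt
echo "skip" > home/.cache/tmp.txt

# Copy everything except the excluded pattern, as a snapshot tool would.
rsync -a --exclude='.cache/' home/ backup/

ls backup   # docs survived; .cache was excluded
```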



The next tab, Auto-remove, allows you to set how often you want snapshots to be removed within a certain time limit or drive space limit, as well as getting down to how many free inodes are left before removing snapshots. I just left this at the default settings for now. I may tweak it a little later.



The items under the Options tab were left at their defaults as well. Most of them are self explanatory.



And finally, the expert options tab, I just left this one as is as well.



Now that we have our settings the way we want them, Back In Time will take you to the main screen where you can kick off your first snapshot. In the upper left corner is an icon that looks like a hard drive with an arrow pointing down. Hit that button to start a manual backup of the files and folders shown in the middle of the screen.



After you hit the button, you will see data moving at the bottom of the screen. This just shows what is being backed up and what percentage is complete from each item.



If you are wondering what is being backed up, and whether the files and/or folders included a change or are just informational, you can see the logs by going to the top of the screen, clicking on View and selecting View Last Log. This will show you everything that was backed up the last time and what its status was.


Linux Backups (Are They Needed??) – Part 1 – TimeShift

Hey Guys,

To answer the question above, YES. Backups are needed when dealing with any computer operating system. There are many backup solutions out there that conduct backups in different ways. Some do full and incremental backups, some do bare metal type backups while some issue full and incremental system snapshots. Well, in this small post, I want to go over an app that I found for Manjaro and any other linux distro called TimeShift.



TimeShift is very similar to the macOS Time Machine and the Windows built in Backup and Restore app for snapshots. This app has turned out to be an awesome package and a necessity for my linux system at home. Below is a description of the application from the development page. I like the fact that it uses rsync as part of its operations. This makes for a good way to keep backups up to date, and if something happens during the backup or restore process, the rsync side should be able to pick up where it left off.

“TimeShift is a system restore utility which takes incremental snapshots of the system using rsync and hard-links. These snapshots can be restored at a later date to undo all changes that were made to the system after the snapshot was taken. Snapshots can be taken manually or at regular intervals using scheduled jobs.”

Here is another statement that I thought would be appropriate here as well.

“TimeShift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded. This ensures that your files remains unchanged when you restore your system to an earlier date. If you need a tool to backup your documents and files please take a look at the excellent BackInTime application which is more configurable and provides options for saving user files.”

To install TimeShift within Manjaro, you can do it one of two ways. Just in case, make sure that you have AUR enabled in the Pamac application:

  • To initialize AUR, open the Pamac application (Add / Remove Software Application)
  • Click on the button in the upper right that looks like three lines on top of each other.
  • Click preferences
  • Give the sudo password if asked
  • Click on the AUR tab
  • Click on Enable AUR Support
  • Select both Search in AUR by default and Check for Updates in AUR
  • Close the window

Now for the TimeShift install Process from Pamac (Add / Remove Software Application):

  • Open the Pamac application (Add / Remove Software Application)
  • Type in timeshift in the search
  • Click on the AUR tab
  • Select timeshift
  • Click Apply
  • A popup will show that there will be dependencies that need to be resolved. Click on the Commit button
  • Enter the sudo password to elevate permission for the installer
  • Just sit back and let the installer finish.
  • Once the install is complete just close the Pamac application
  • Now you have a working version of TimeShift installed and ready to go.

If you choose to do this from a terminal, here you go:

  • Make sure that yaourt is installed by issuing sudo pacman -S yaourt
  • Yaourt is the command line app for working with the AUR repository
  • Remember that yaourt complains about running as root. Run it from your local account with sudo access.
  • Open a terminal and type yaourt -S timeshift (yaourt will call sudo itself when needed)
  • You will need to give your sudo password to elevate for the installation
  • If prompted to edit files, just say no unless you know what you’re doing
  • If prompted to install packages, just say yes
  • Once the installation is complete, you will have a fully operational version of TimeShift

Now that you have a working version of TimeShift installed and ready, go ahead and run the application. There is an initial configuration process that you can set for your usage. Once that is done, click the Create button. This will kick off an initial snapshot process, and depending on the size of the hard drive, this process can take a little time. Once the initial process is complete, you can create incremental snapshots to be used as restore points within your system.

Here is the help page for the timeshift command line application. The application is easy to use, as you can probably tell.


[kf4bzt@tim-pc ~]$ timeshift --help

Timeshift v17.2 by Tony George (teejeetech@gmail.com)


timeshift --check
timeshift --create [OPTIONS]
timeshift --restore [OPTIONS]
timeshift --delete-[all] [OPTIONS]
timeshift --list-{snapshots|devices} [OPTIONS]


--list[-snapshots] List snapshots
--list-devices List devices

--check Create snapshot if scheduled
--create Create snapshot (even if not scheduled)
--comments <string> Set snapshot description
--tags {O,B,H,D,W,M} Add tags to snapshot (default: O)

--restore Restore snapshot
--clone Clone current system
--snapshot <name> Specify snapshot to restore
--target[-device] <device> Specify target device
--grub[-device] <device> Specify device for installing GRUB2 bootloader
--skip-grub Skip GRUB2 reinstall

--delete Delete snapshot
--delete-all Delete all snapshots

--snapshot-device <device> Specify backup device (default: config)
--yes Answer YES to all confirmation prompts
--btrfs Switch to BTRFS mode (default: config)
--rsync Switch to RSYNC mode (default: config)
--debug Show additional debug messages
--verbose Show rsync output (default)
--quiet Hide rsync output
--help Show all options


timeshift --list
timeshift --list --snapshot-device /dev/sda1
timeshift --create --comments "after update" --tags D
timeshift --restore
timeshift --restore --snapshot '2014-10-12_16-29-08' --target /dev/sda1
timeshift --delete --snapshot '2014-10-12_16-29-08'
timeshift --delete-all


1) --create will always create a new snapshot
2) --check will create a snapshot only if a scheduled snapshot is due
3) Use --restore without other options to select options interactively
4) UUID can be specified instead of device name
5) Default values will be loaded from app config if options are not specified




To create a backup from the command line:

  • Type sudo timeshift --create
  • As this is a first time run, it will say “First run mode (config file not found)”
  • This will create the initial full snapshot of the operating system
  • Below is an example of the full system snapshot run




[kf4bzt@tim-pc ~]$ sudo timeshift --create
First run mode (config file not found)
Selected default snapshot type: RSYNC
Selected default snapshot device: /dev/sda1
Estimating system size...
Creating new snapshot...(RSYNC)
Saving to device: /dev/sda1, mounted at path: /
Synching files with rsync...

Created control file: /timeshift/snapshots/2017-03-22_11-03-47/info.json
Parsing log file...

RSYNC Snapshot saved successfully (845s)
Tagged snapshot '2017-03-22_11-03-47': ondemand
Added cron task: /etc/cron.d/timeshift-hourly
Added cron task: /etc/cron.d/timeshift-boot




I issued the following to list the snapshots on the machine so far


[kf4bzt@tim-pc ~]$ sudo timeshift --list
[sudo] password for kf4bzt:
Device : /dev/sda1
UUID : 138fcf48-a8ea-49cd-aa1a-57f2a6a981c7
Path : /
Mode : RSYNC
Device is OK
1 snapshots, 129.5 GB free

Num Name Tags Description
0 > 2017-03-22_11-03-47 O




For the sake of testing, I reran the create to kick off an incremental snapshot


[kf4bzt@tim-pc ~]$ sudo timeshift --create
Creating new snapshot...(RSYNC)
Saving to device: /dev/sda1, mounted at path: /
Linking from snapshot: 2017-03-22_11-03-47
Synching files with rsync...
Created control file: /timeshift/snapshots/2017-03-22_11-19-36/info.json
Parsing log file...
RSYNC Snapshot saved successfully (13s)
Tagged snapshot '2017-03-22_11-19-36': ondemand




Here is another list with the initial and incremental snapshots in place


[kf4bzt@tim-pc ~]$ sudo timeshift --list
Device : /dev/sda1
UUID : 138fcf48-a8ea-49cd-aa1a-57f2a6a981c7
Path : /
Mode : RSYNC
Device is OK
2 snapshots, 129.4 GB free

Num Name Tags Description
0 > 2017-03-22_11-03-47 O
1 > 2017-03-22_11-19-36 O




To do a restore of the snapshot, just issue the following


[kf4bzt@tim-pc ~]$ sudo timeshift --restore --snapshot '2017-03-22_11-19-36'
To restore with default options, press the ENTER key for all prompts!

Press ENTER to continue...

Re-install GRUB2 bootloader? (recommended) (y/n): y

Select GRUB device:

Num Device Description
0 > sda ATA ST3160023AS [MBR]
1 > sda1 ext4, 150.6 GB
3 > sdc ATA ST3160023AS [MBR]

[ENTER = Default (/dev/sda), a = Abort]

Enter device name or number (a=Abort): 0

GRUB Device: /dev/sda

Data will be modified on following devices:

Device Mount
--------- -----
/dev/sda1 /
Please save your work and close all applications.
System will reboot after files are restored.

This software comes without absolutely NO warranty and the author takes no responsibility for any damage arising from the use of this program. If these terms are not acceptable to you, please do not proceed beyond this point!

Continue with restore? (y/n): y
Mounted '/dev/sda1' at '/mnt/timeshift/restore/'
Backup Device: /dev/sda1
Snapshot: 2017-03-22_11-19-36 ~
Restoring snapshot...
Synching files with rsync...

Please do not interrupt the restore process!
System will reboot after files are restored

building file list ... done
.d..t...... mnt/
.d..t...... timeshift/
>f..t...... var/log/journal/f035dd48f4eb41d0ba36ad8c9879b1bd/system.journal
.d..t...... var/log/timeshift/

sent 24,747,157 bytes received 49 bytes 16,498,137.33 bytes/sec
total size is 7,407,772,695 speedup is 299.34

Re-installing GRUB2 bootloader...
Installing for i386-pc platform.

Installation finished. No error reported.

Updating GRUB menu...
Generating grub configuration file ...
Found background: /usr/share/grub/background.png
Found Intel Microcode image
Found linux image: /boot/vmlinuz-4.4-x86_64
Found initrd image: /boot/initramfs-4.4-x86_64.img
Found initrd fallback image: /boot/initramfs-4.4-x86_64-fallback.img
Found Windows 7 (loader) on /dev/sdb1
Found memtest86+ image: /boot/memtest86+/memtest.bin

Synching file systems...
Rebooting system...
Failed to read reboot parameter file: No such file or directory





While installing within Ubuntu Mate 17, I created the following screenshots to show what TimeShift should look like from the beginning. The first screenshot starts the configuration of TimeShift. I have been leaving this as RSYNC as I find it works better when creating backups for your data.



You need to choose a drive to place the snapshots on for storage. As you can see, my laptop only has one drive, so I selected sda1 to store the snapshots.



Now, we have to choose how often we want snapshots to be taken. The default is Boot at 5 and Daily at 5. I added Weekly at 3 just to play around with the settings. Keep in mind that your machine has to be powered on, and not in sleep mode, for this to work.



The next screenshot is for creating Includes and Excludes, but if you want a full system snapshot then leave this at default. Keep in mind that the snapshots change only if there are changes to the file system. This does not include the items within users’ home folders. You will need an app such as Back In Time to back up the home folders and their content.



The screenshot shown below shows a snapshot in place.



And the final screenshot shows the information about the snapshot.


Infinality Fonts and Arch Linux

The use of Infinality fonts within Arch linux will greatly improve font rendering. But as mentioned in a few posts on the interwebs, the developer of the Infinality fonts pretty much fell off the face of the earth and no one has heard from him in quite a while. There is a way to get around this issue, from what I have found so far. The following is how I got the Infinality fonts to install, as there appeared to be public key issues since the packages in the repository have not been updated for a little while now. I will take you through the process that I found to work.



This part is from AJ Reissig who came up with the initial installation process within Arch:


To improve the fonts in Arch we first need to add some additional fonts. Add the following to the terminal:

sudo pacman -S ttf-bitstream-vera ttf-inconsolata ttf-ubuntu-font-family ttf-dejavu ttf-freefont ttf-linux-libertine ttf-liberation

yaourt -S ttf-ms-fonts ttf-vista-fonts monaco-linux-font ttf-qurancomplex-fonts


–Since monaco-linux-font no longer exists, change it to ttf-monaco


Next we will disable bitmap fonts, which are used as a fallback.

sudo ln -s /etc/fonts/conf.avail/70-no-bitmaps.conf /etc/fonts/conf.d


— I skipped adding the repos to the pacman.conf as the locations no longer exist



[kf4bzt@tim-pc ~]$ yaourt -S freetype2-infinality


Now, more than likely you will see an error similar to the one below. There is a way to fix it so that you can get on with the install process. :0)


==> Verifying source file signatures with gpg...
freetype-2.7.1.tar.bz2 ... FAILED (unknown public key C1A60EACE707FDA5)
freetype-doc-2.7.1.tar.bz2 ... FAILED (unknown public key C1A60EACE707FDA5)
ft2demos-2.7.1.tar.bz2 ... FAILED (unknown public key C1A60EACE707FDA5)
==> ERROR: One or more PGP signatures could not be verified!
==> ERROR: Makepkg was unable to build freetype2-infinality.



Take the public key that is shown in the error and receive the key into the keyring. The -r switch is equivalent to GnuPG’s --recv-keys switch.


[kf4bzt@tim-pc ~]$ sudo pacman-key -r C1A60EACE707FDA5
gpg: key C1A60EACE707FDA5: public key "Werner Lemberg <wl@gnu.org>" imported
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 19 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1 valid: 19 signed: 69 trust: 1-, 0q, 0n, 18m, 0f, 0u
gpg: depth: 2 valid: 69 signed: 7 trust: 69-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2017-09-07
gpg: Total number processed: 1
gpg: imported: 1
==> Updating trust database...
gpg: next trustdb check due at 2017-09-07


Once you have received the keys into the keyring, go ahead and sign them using the --lsign-key switch as shown below.


[kf4bzt@tim-pc ~]$ sudo pacman-key --lsign-key C1A60EACE707FDA5
-> Locally signing key C1A60EACE707FDA5...
==> Updating trust database...
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 20 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1 valid: 20 signed: 69 trust: 2-, 0q, 0n, 18m, 0f, 0u
gpg: depth: 2 valid: 69 signed: 7 trust: 69-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2017-09-07

Now that you have added the keys successfully, have gpg list the keys; this will also create the trust database if it does not already exist.


[kf4bzt@tim-pc ~]$ gpg --list-keys

gpg: /home/kf4bzt/.gnupg/trustdb.gpg: trustdb created



Add the following line to the ~/.gnupg/gpg.conf file which tells gpg where your keyring lives.


[kf4bzt@tim-pc ~]$ vim ~/.gnupg/gpg.conf

add "keyring /etc/pacman.d/gnupg/pubring.gpg" to the end of the configuration file
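For reference, the same edit can be done non-interactively. The path follows the config file location mentioned above; this is just a sketch that appends the line (it does not check for duplicates):

```shell
# Append the keyring line to gpg.conf (creates the file if missing).
GPGCONF="$HOME/.gnupg/gpg.conf"
mkdir -p "$(dirname "$GPGCONF")"
echo 'keyring /etc/pacman.d/gnupg/pubring.gpg' >> "$GPGCONF"

tail -n 1 "$GPGCONF"
```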



After you have the keys good to go, start the install process once more as shown below.


[kf4bzt@tim-pc ~]$ yaourt -S freetype2-infinality


That is all there is to it. You will now have a working copy of the Infinality fonts on your Arch system giving you better font rendering quality than the stock fonts.