
Helpful tips about Debian packages

I have worked with Debian packages for a few years, since the company I work for uses them to deploy the software we build.

The official documentation is very detailed, but it can be hard to find what you are looking for.

Here are a few tips I have learned over the past years that often help me a lot.

The folder where the package maintainer scripts (preinst, postinst, etc.) are stored:


fabio@perrella:~ $ ll /var/lib/dpkg/info/apache2.*
-rw-r--r-- 1 root root  6065 Jan 14 15:46 /var/lib/dpkg/info/apache2.conffiles
-rw-r--r-- 1 root root  7953 Jan 28 20:52 /var/lib/dpkg/info/apache2.list
-rw-r--r-- 1 root root  1464 Jan 14 15:47 /var/lib/dpkg/info/apache2.md5sums
-rwxr-xr-x 1 root root 13714 Jan 14 15:46 /var/lib/dpkg/info/apache2.postinst*
-rwxr-xr-x 1 root root  3905 Jan 14 15:46 /var/lib/dpkg/info/apache2.postrm*
-rwxr-xr-x 1 root root  4602 Jan 14 15:46 /var/lib/dpkg/info/apache2.preinst*
-rwxr-xr-x 1 root root   229 Jan 14 15:46 /var/lib/dpkg/info/apache2.prerm*

The folder where the most recently downloaded packages are cached:


/var/cache/apt/archives

How to configure the package so the service is restarted automatically when the package is upgraded:


in debian/rules:

dh_installinit --restart-after-upgrade

This replaces the #DEBHELPER# token in debian/postinst with code that restarts the service.
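In a modern dh-style debian/rules, this option is usually passed through an override target. A minimal sketch (the rest of the rules file and the package layout are assumptions, not from the original post):

```make
#!/usr/bin/make -f
# Delegate everything to dh...
%:
	dh $@

# ...except dh_installinit, which gets the restart flag
override_dh_installinit:
	dh_installinit --restart-after-upgrade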

Conffiles:


Put all configuration files into the file debian/conffiles; this prevents the package from replacing the server's configuration files when a new version of the package is installed. This way it is not necessary to remove the configuration files from the package or to ship them as .sample files in the repository.
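For reference, debian/conffiles is just a list of absolute paths, one per line (the paths below are hypothetical examples, not from the original post):

```
/etc/myapp/myapp.conf
/etc/myapp/database.yml
```

Files listed here are handled by dpkg's conffile machinery: on upgrade, locally modified versions are kept (or the admin is prompted) instead of being silently overwritten.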

Do you have any useful tips? Please add a comment below!

How to investigate "too many open files" errors

Last week, at my job, we were trying to find the root cause of a problem that was killing one of our applications. It is a Rails app running on Debian, and we had some clues:
- looking at New Relic errors, we saw many errors like "getaddrinfo: Name or service not known"
- looking at Unicorn logs, there were a lot of "too many open files" errors

It seemed the application was being killed by these errors.

We thought the server might have network problems, since that would explain the "getaddrinfo: Name or service not known" error, which happens when a domain cannot be resolved via DNS.

But after some research, we remembered that "too many open files" is related to the number of open file descriptors. It is possible to list all open files with the lsof command. We ran it, but the count was far too low; it did not look like the default limit of 1024 was being reached.

So we kept searching for an answer to our problem, and we found another way to list the current open files of a process, using a command like this:

ls -la /proc/3591/fd

This command shows all the file descriptors related to a process (pid).
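As a quick self-contained illustration (using the current shell's own PID instead of a real worker PID like the 3591 above), both the per-process limit and the current descriptor count can be read from /proc:

```shell
pid=$$                                    # current shell; substitute your process's PID
grep "Max open files" /proc/$pid/limits   # effective soft and hard limits
ls /proc/$pid/fd | wc -l                  # how many descriptors are open right now
```

Comparing these two numbers tells you how close a process is to hitting its limit.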

We ran this command against our process, and we noticed many file descriptors were not being listed due to permission constraints.

When we ran the same command as root, all the FDs were listed. So we thought "let's try running lsof as root to see if the result is different", and it was: a much bigger number appeared!

The lsof output can be filtered with grep, so we could analyse why there were so many open files.

After some analysis, and with a little bit of shell scripting, we ended up with this command:

lsof | grep -e "^ruby" | awk '{print $9}' | grep imap | wc

This showed us a lot of open IMAP connections.

The app uses IMAP to fetch information about mailboxes, and the connections were not being closed afterwards, so the problem was found!

So the lesson is: when investigating "too many open files" errors, run lsof as root!

Here are some links I found that explain it better:

http://www.commandlinefu.com/commands/view/9893/find-ulimit-values-of-currently-running-process
http://geekswing.com/geek/quickie-tutorial-ulimit-soft-limits-hard-limits-soft-stack-hard-stack/

PostgreSQL: migrating from version 9.2 to 9.3

On Debian/Ubuntu there is a simple way to upgrade PostgreSQL from version 9.2 to 9.3 with the pg_upgradecluster command (http://manpages.ubuntu.com/manpages/jaunty/man8/pg_upgradecluster.8.html).

Just follow the procedure below:

sudo apt-get install postgresql-9.3
sudo /etc/init.d/postgresql stop
sudo pg_dropcluster --stop 9.3 main
sudo pg_upgradecluster 9.2 main

After that, you can run the pg_lsclusters command to verify that version 9.3 is OK.


Protecting crontab lines with flock

Sometimes we want to put a line in crontab that runs periodically, but we want to guarantee somehow that it will not run in parallel if the previous execution has not finished, for example:

*/1 * * * * root /sbin/exemplo/processa_relatorios.asp -t diario

If the processa_relatorios.asp command starts at 18:23 and has not finished by 18:24, we will have two executions of it running in parallel, which can produce disastrous results.

Many people (myself included) end up implementing some locking mechanism in the application (or in the command) so it can tell that another instance is already running and refuse to start again when cron fires. But I discovered the flock command, available on Debian and on other Linux distributions, which can solve this much more easily!

The cron line for this example would look like this:

*/1 * * * * root flock -w 0 /tmp/lock_relatorios  -c "/sbin/exemplo/processa_relatorios.asp -t diario"

where:
"-w 0" means it waits 0 seconds if the lock is already held (that is, it gives up on running the command immediately, since there is no wait time)
"-c" is the command to be executed
"/tmp/lock_relatorios" is the file that will be locked during the execution of the command

Another example:

- the file /etc/cron.d/teste

*/1 * * * * root flock -w 0 /tmp/test -c "date && sleep 70" >> /tmp/log.txt

- the log output will be:


Mon Dec 19 09:36:01 BRST 2011
Mon Dec 19 09:38:01 BRST 2011
Mon Dec 19 09:40:01 BRST 2011

In this case we can see that it did not run every minute, because the "sleep 70" was holding the lock and blocking the next execution.
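The same behaviour can be reproduced directly in a shell, without cron. A minimal sketch (/tmp/demo.lock is an arbitrary lock file chosen for the demo):

```shell
lock=/tmp/demo.lock
flock -w 0 "$lock" -c "sleep 2" &          # first run grabs the lock and holds it for 2s
sleep 0.5
flock -w 0 "$lock" -c "echo ran" || echo "lock busy"   # second run gives up immediately
wait
```

The second invocation prints "lock busy" because the lock is still held, which is exactly what happens to the overlapping cron runs above.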

Ruby 1.9.2 YAML bug on Ubuntu


We hit a YAML bug in Ruby 1.9.2 (on Ubuntu) where environment-specific configuration values were not being picked up.
Note: this problem does not happen on the Mac.

For example, with this configuration:

defaults: &defaults
  default_domain: 'defakto.com.br'
development:
  servers: ['server1.defakto.com.br']
  <<: *defaults
test:
  servers: ['server2.defakto.com.br', 'server3.defakto.com.br']
  <<: *defaults

With this configuration, in the "test" environment the "servers" setting returned nil.

The problem is more or less explained in this link, and this other link has the solution, which is basically to add the line below to application.rb:

YAML::ENGINE.yamler = 'syck'

Cheers!

Problems with PHP gettext on Debian

A tip on how to get PHP gettext working on a Debian server.

On Ubuntu, we got it working by following this tutorial.

On Debian, we can follow the same tutorial, but one extra step is needed:

- uncomment the 'es_AR' line in the file /etc/locale.gen (in my case I am translating to es_AR)
- run the locale-gen command
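The two steps above can be scripted. A hedged sketch, assuming the locale line appears commented out as "# es_AR.UTF-8 UTF-8", which is the usual format in /etc/locale.gen:

```shell
# Uncomment the es_AR locale line (requires root)
sudo sed -i 's/^# *\(es_AR.UTF-8 UTF-8\)/\1/' /etc/locale.gen
# Regenerate the compiled locales
sudo locale-gen
```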




That did the trick!