Last week we looked at why Linux deserves some consideration when choosing an operating system for your digital recording studio. But even the worthiest operating system is useless without usable apps.
Fortunately, there is a long list of excellent music applications available for Linux. If you choose one of the Linux distributions recommended last week, many of them come preinstalled.
This article was previously published on the AudioJungle blog, which has moved on to a new format in 2010. We'll be bringing you an article from the AudioJungle archives each week.
We'll leave out the programs not directly about making music – programs like guitar tuners, streaming systems, notation software and guitar tab apps – but we will look at some of the plug-ins and effects systems that are available. And we'll leave out the applications that have better alternatives. My original list had over 50 programs.
Most of the programs are available free of charge, and in general are of higher quality than many free audio apps for Windows. So without further ado, here are 29 music making applications for Linux.
1. Ardour

Ardour is "the new digital audio workstation". It aims to be a professional DAW, and offers features like "multichannel recording, non-destructive editing with unlimited undo/redo, full automation support, a powerful mixer, unlimited tracks/busses/plugins, timecode synchronization, and hardware control from surfaces like the Mackie Control Universal."
2. Jokosher

Jokosher is a simpler multi-track recorder, designed for guitarists, not engineers. It "provides a complete application for recording, editing, mixing and exporting audio, and has been specifically designed with usability in mind." It's perfect for musicians who want to record their music without spending all of their time learning how the program works.
3. Sweep

Sweep is an audio editor and live playback tool. It aims to be easy to use, support many codecs and audio formats, and support LADSPA effects plug-ins (see below).
4. ReZound

ReZound is a stable, graphical audio editor.
5. Traverso DAW
Traverso DAW is a multitrack recording suite that is cross-platform. Besides Linux, it also works on Windows and Mac OS X. It claims to have a unique interface, a unique approach, and cover all tasks from recording to mastering.
6. Amuc (The Amsterdam Music Composer)
Amuc is an application for composing and playing music. You enter tune fragments graphically, or import from MIDI files. The program includes 5 different built-in instruments, 6 mono synthesizers, and sampled instruments.
7. LMMS (Linux Multimedia Studio)
Similar to FL Studio, LMMS allows you to produce music with your computer. Features include "the creation of melodies and beats, the synthesis and mixing of sounds, and arranging of samples. You can have fun with your MIDI-keyboard and much more; all in a user-friendly and modern interface."
8. Audacity

Audacity is a well-known and much-loved cross-platform sound editor.
9. Rosegarden

Rosegarden is an easy-to-learn audio and MIDI sequencer, score editor, and general-purpose music composition and editing environment.
10. MusE

MusE is a MIDI/Audio sequencer with recording and editing capabilities. It aims to be a complete multitrack virtual studio with support for MIDI and audio sequencing with real-time effects.
11. Qtractor

Qtractor is an audio/MIDI multi-track sequencer application that aims to evolve into a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home studio.
12. Seq24

Seq24 is a minimal loop-based MIDI sequencer. It was created to provide a very simple interface for editing and playing MIDI 'loops', and excludes the bloated features of the large software sequencers, including only the small subset of features its author has found usable in performing.
13. Renoise

Renoise has a unique bottom-up approach to music making. With its vertical timeline and streamlined interface, Renoise lets you have direct control over the composition. Features include automatic plug-in delay compensation, high resolution timing, fast interface, cross-platform support (Linux, Mac OS X and Windows), plug-in support, and low-latency audio.
14. TiMidity++

TiMidity++ is a software synthesizer, playing MIDI files by converting them into PCM waveform data. It can also convert MIDI files into various audio formats.
15. amSynth

amSynth stands for Analogue Modeling SYNTHesizer. It provides virtual analogue synthesis in the style of the classic Moog Minimoog and Roland Juno. It offers an easy-to-use interface and synth engine, while still creating varied sounds.
16. Bristol Audio Synthesiser
Bristol Audio Synthesiser is an emulator for diverse keyboard instruments. Currently about 20 are implemented: various Moog, Sequential Circuits, Oberheim, Yamaha, Roland, Hammond, Korg, ARP, and Vox algorithms. The application consists of an audio engine and an associated graphical user interface called Brighton, which acts as a dedicated master keyboard for each emulation.
17. terminatorX

terminatorX is a real-time audio synthesizer that allows you to "scratch" on digitally sampled audio data the way hip-hop DJs scratch on vinyl records. It features multiple turntables, real-time effects (built-in as well as LADSPA plugin effects), and a sequencer and MIDI interface.
18. Qsynth

Qsynth is a GUI front-end for FluidSynth. FluidSynth is a software synthesiser based on the SoundFont specification.
19. ZynAddSubFX

ZynAddSubFX is an open source software synthesizer capable of making a countless number of instruments.
20. LAoE (Layer Based Audio Editor)
LAoE stands for Layer-based Audio Editor. It is a feature-rich graphical audio sample editor, based on multiple layers, floating-point samples, volume masks and variable selection intensity, with many plugins for manipulating sound: filtering, retouching, resampling, graphical spectrogram editing with brushes and rectangles, sample-curve editing with a freehand pen, splines and other interpolation curves, and effects like reverb, echo, compression, expansion, pitch-shift, time-stretch, and much more.
21. LinuxSampler

The LinuxSampler project was founded with the goal of producing a free, streaming-capable, open source, pure software audio sampler with professional-grade features, comparable to both hardware and commercial Windows/Mac software samplers, and of introducing new features not yet available in any other sampler. It is very modular, and usually runs as its own process in the background of the computer.
22. SooperLooper

SooperLooper is a live looping sampler capable of immediate loop recording, overdubbing, multiplying, reversing and more. It allows for multiple simultaneous multi-channel loops limited only by your computer's available memory. SooperLooper is also available for Mac OS X.
23. CheeseTracker

CheeseTracker is a software sampler and step-based sequencer. It allows a musician to turn single-note samples into instruments capable of covering three or four octaves (by playing the samples at different speeds, resulting in different pitches). In addition, it is possible to take a collection of samples that are recorded at different octaves, and combine them into a single "instrument," allowing for even more octaves without sampling artifacts.
24. Hydrogen

Hydrogen is an advanced drum machine for GNU/Linux. Its main goal is to bring professional yet simple and intuitive pattern-based drum programming.
25. Breakage

Breakage is an intelligent drum machine designed to make it easy and fun to play complex, live breakbeat performances. A step-sequencer pattern editor and previewer, database, sample browser, neural network, pattern morphs, statistics and probabilistic pattern generator give you the tools to work with breaks. Breakage is also available for Mac OS X and Windows.
26. JAMin

JAMin is the JACK Audio Connection Kit (JACK) Audio Mastering interface. JAMin is an open source application designed to perform professional audio mastering of stereo input streams. It uses LADSPA (see below) for digital signal processing (DSP). It features linear filters, a 30 band graphic EQ, a 1023 band hand-drawn EQ with parametric controls, a spectrum analyser, a 3 band peak compressor, multiband stereo processing, and a loudness maximiser.
27. LADSPA effects and plug-ins
LADSPA is the Linux Audio Developer's Simple Plugin API. It is a standard that allows software audio processors and effects to be plugged into a wide range of audio synthesis and recording packages.
Steve Harris lists quite a few LADSPA plug-ins on his website.
28. DSSI

DSSI (pronounced "dizzy") is an API for audio processing plugins, particularly useful for software synthesis plugins with user interfaces. DSSI is an open and well-documented specification developed for use in Linux audio applications, although portable to other platforms. It may be thought of as LADSPA-for-instruments, or something comparable to VSTi.
29. LV2 Audio Plugin Standard
LV2 is a standard for plugins and matching host applications, mainly targeted at audio processing and generation. It is a successor of LADSPA, intended to address the limitations of LADSPA which many applications have outgrown.
This article was first published over a year ago on the AudioJungle blog. Has anything changed in Linux audio since then? Let us know in the comments.
It seems everywhere I go to work, I face the same operational problems. Once again, I must find a way to centralize logs and provide different levels of access to said logs. Sadly, the syslog protocol is getting quite aged, and it’s just not enough anymore. It works well if you have only a few machines, and only need to provide access to sysadmins. But when developers and other types of users are thrown into the mix, you need a more granular system.
Also, support for the syslog protocol varies greatly from daemon to daemon. One major culprit for me has always been Apache (and web servers in general), because out of the box it only supports syslog for error logs. For access logs, you can use different techniques, but no matter which one you use, you end up with the same problem: if you have more than one vhost on the machine, all their logs end up in the same syslog facility. You can obviously filter them after that, but that's way more work than, say, email logs.
If you work for a company that has a large budget, you may consider getting Splunk. It's a very good commercial product, with a free version as well. But last I checked, it was priced at US$7,000 per half gigabyte of logs indexed per day. When you have web servers each generating several gigabytes of logs per day, that gets very expensive: money that could instead buy hardware to deploy more open source software, which I tend to prefer.
So after a week or so of investigation, testing and benchmarking, here are my findings. The architecture of the final setup is not settled yet, but it will more or less look like this.
NOTE: Keep in mind, this post won’t go into details about clustering and scaling. But all chosen products can achieve that. You should be able to come up with that on your own easily. I will concentrate on the tools and the workflow.
1. Syslog

To avoid changing the syslog daemon on each server, I decided to keep the default sysklogd or rsyslog intact, since that's what is installed out of the box. With a centralized configuration management system, it's simple to add a new entry to send your logs to another machine. Something like this:
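For instance, a single forwarding rule appended to the daemon's configuration does the job. This is a sketch; `loghost` is a placeholder for your central machine:

```
# /etc/rsyslog.conf or /etc/syslog.conf -- forward everything to the
# central log machine over UDP (with rsyslog, use @@ for TCP instead).
*.*    @loghost
```

The same `@host` syntax works for both sysklogd and rsyslog, which is why keeping the stock daemon costs nothing here.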
That will take care of most of the system logs and send them to a central machine. But what about Apache? As far as error logs are concerned, it's simple: you need to reconfigure Apache to send them to syslog. On Red Hat-based systems, you want to edit /etc/httpd/conf/httpd.conf, and on Debian-based systems (I might be wrong, I don't have one handy with Apache installed) you'd have to modify /etc/apache2/apache2.conf or something like that.
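The change itself is a one-liner, using the local2 facility mentioned just below as the example:

```
# httpd.conf / apache2.conf -- send Apache error logs to syslog,
# tagged with the local2 facility
ErrorLog syslog:local2
```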
I use local2 as an example, but you could pick any facility. In any case, I recommend using one of the local* facilities, as later on, it will allow you to create filters and alerts based on that.
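Incidentally, the reason facility-based filtering works at all is that every syslog message carries a numeric priority value combining facility and severity (PRI = facility × 8 + severity, per RFC 3164), and local2 is facility 18. A quick, self-contained Python illustration:

```python
# Syslog encodes facility and severity into one priority value:
# PRI = facility * 8 + severity (RFC 3164). local0..local7 are 16..23.
FACILITIES = {"local0": 16, "local1": 17, "local2": 18, "local3": 19,
              "local4": 20, "local5": 21, "local6": 22, "local7": 23}
SEVERITIES = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
              "warning": 4, "notice": 5, "info": 6, "debug": 7}

def pri(facility: str, severity: str) -> int:
    """Compute the PRI value carried at the start of a syslog packet."""
    return FACILITIES[facility] * 8 + SEVERITIES[severity]

def decode(pri_value: int) -> tuple:
    """Split a PRI value back into (facility_code, severity_code)."""
    return divmod(pri_value, 8)

print(pri("local2", "info"))  # 150 -- what logger -p local2.info emits
print(decode(150))            # (18, 6)
```

This is why reserving one local* facility per log source makes downstream filtering cheap: the receiving daemon only has to look at the PRI value.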
Now, as I was mentioning earlier, the issue with web servers is that you can have more than one vhost on a machine, and syslog was never designed with that in mind. So you will inevitably end up with all logs for a machine in the same facility. Apache supports piping logs to an external program, and I found that the simplest approach is to use the logger tool. So either system-wide or per vhost, you can add something similar to your configuration:
CustomLog "|/bin/logger -p local2.info" combined
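To tell vhosts apart downstream, logger's -t flag can tag each line with the vhost name (www.example.com here is just a placeholder):

```
# Per-vhost variant: tag access log lines with the vhost name so they
# can be filtered apart on the central machine.
CustomLog "|/bin/logger -t www.example.com -p local2.info" combined
```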
During benchmarks, that obviously added some overhead, but not much: maybe 5% more. You will have to plan your site's capacity with that in mind, but I think it's well worth it for the operational advantages.
As far as Java applications are concerned, you can easily configure them to send to syslog using log4j.
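A minimal log4j.properties sketch, again with `loghost` standing in for your central machine:

```
# log4j.properties -- route application logs to the central syslog host
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=loghost
log4j.appender.SYSLOG.facility=LOCAL2
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%p %c - %m%n
```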
2. Tailing log files (warning)
In the past, I would sometimes use tools that tail log files and then forward them to a central syslog machine. That's fine if you just need to archive on a file system somewhere, but there's one important thing to know: what you get in a log file is just a string, not a standard, properly formatted syslog message. Rsyslog is able to write real syslog messages to your files, but I don't recommend it, as they're harder to read. So if you want the webUI at the end of the proposed chain to work properly, you don't want to tail files; it's better to use something like logger if possible.
3. Logstash

Now we leave the legacy world and enter the present time of logging. Logstash is the Swiss Army knife of the logging world. It's a very well designed application that can be used in either agent or server mode. In agent mode, you can configure different types of inputs and outputs, and it supports a wide range of protocols: file, syslog, AMQP, etc.
So here, I decided to use it with a syslog input to receive logs from our machines. It's also easy to load-balance with a layer 3 load-balancer. It will then send the logs to an exchange on RabbitMQ.
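The agent configuration boils down to something like the sketch below (Logstash 1.x configuration syntax; the host and exchange names are placeholders, and setting names may differ slightly between versions):

```
# logstash agent: listen for syslog, publish to a RabbitMQ fanout exchange
input {
  syslog {
    type => "syslog"
    port => 514
  }
}
output {
  amqp {
    host          => "rabbit.example.com"   # placeholder
    exchange_type => "fanout"
    name          => "syslog"               # exchange name
  }
}
```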
4. RabbitMQ

Now, this part is optional: you could send the logs straight from Logstash to Graylog2, but I prefer to have a middleman do a bit of queuing. Also, once the messages enter an exchange on an AMQP server, you can route them to more than one queue for different types of processing. To do that, though, you need to use a fanout exchange.
Why RabbitMQ? Well, it’s written in Erlang and it’s very fast. During my benchmarks, it was processing between 4000 and 5000 messages per second during the peaks. Also, it’s easily clusterable in an elastic kind of way. And all operations can be done while the cluster is live. I also recommend you install the management plugins, as that will provide a very nice webUI to manage your stack. Often, UIs of the sort are limited, but in this case, everything that can be done on the CLI is doable with the webUI as well. And it’s very well designed and pleasant to the eye.
So at this point, my messages are entering an exchange named 'syslog' that routes messages to two different queues: graylog and elasticsearch.
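With the management plugin installed, that topology can be declared with the bundled rabbitmqadmin tool. A sketch, using the names above:

```
# Declare the fanout exchange and the two queues, then bind both queues
rabbitmqadmin declare exchange name=syslog type=fanout
rabbitmqadmin declare queue name=graylog
rabbitmqadmin declare queue name=elasticsearch
rabbitmqadmin declare binding source=syslog destination=graylog
rabbitmqadmin declare binding source=syslog destination=elasticsearch
```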
NOTE: As I write this, version 2.5.1 has just been released. At this point in time, queues cannot be replicated across your cluster, so if you lose the node where a queue was created, you lose the queue. That said, you can query your queue from any node in the cluster, and support for replicated queues should be available soon. You could use DRBD, though, to cluster a single node; that would give you high availability at the queue level.
5. Logstash (again)
Now we're almost ready to give our different users access to our logs. At this step, logs are ready to be sent to Graylog2, so we will use another Logstash instance with an AMQP input that reads messages from our 'graylog' queue and forwards them to Graylog2 using a GELF output. That's the preferred protocol for importing messages into Graylog2, and the Logstash configuration for this step is really easy and straightforward.
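For the record, it looks roughly like this (again Logstash 1.x syntax; hostnames are placeholders and setting names may vary by version):

```
# logstash: consume from the 'graylog' queue, forward as GELF
input {
  amqp {
    host => "rabbit.example.com"    # placeholder
    name => "graylog"               # queue to consume from
    type => "syslog"
  }
}
output {
  gelf {
    host => "graylog.example.com"   # placeholder
  }
}
```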
6. Graylog2

This is where all the magic happens. Graylog2 has two components: a daemon that receives logs, processes them, and inserts them into a capped collection in MongoDB, and a web interface for searching through them. Take a few minutes to read up on capped collections, as it's important to understand them well. Basically, a capped collection works like a FIFO, and to take advantage of the speed inherent to a FIFO, you want to make sure your capped collection fits in RAM. MongoDB will allow you to create a capped collection larger than that, but you get major performance degradation once your data exceeds the amount of available RAM.
With that taken into consideration, on my test machine I created a roughly 5 GB capped collection, which was able to store more than 10 million messages. It's important to know that Graylog2 is not meant to be used for archiving; where it excels is in real-time (or close to it) views of your logs. You can also set up alarms based on facilities, hosts and regexes, which will then email you alerts. Very cool. It allows you to be more proactive, and to detect issues a traditional monitoring system can't find.
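Creating such a collection from the mongo shell looks like this (5 GB, as in my test; the collection name Graylog2 expects may differ by version):

```
// mongo shell: create a 5 GB capped collection for Graylog2 messages
db.createCollection("messages", { capped: true,
                                  size: 5 * 1024 * 1024 * 1024 })
```

Because the size is fixed at creation time, sizing it to fit in RAM up front is the one decision worth getting right.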
7. Elasticsearch

Remember I mentioned two different queues? The reason is simple: once a message is consumed in RabbitMQ, it's not available anymore; it's deleted. So you need more than one queue if you want to feed different systems. Since Graylog2 is great for short-term analysis and real-time debugging, you can't count on it for archiving. Enter Elasticsearch: a clusterable full-text indexer/search engine based on the Lucene project from the Apache Foundation. Its main goal is to be an elastic search engine that is very simple to use and configure, and from my short tests with it, it lives up to that. It discovers new nodes using multicast, so basically you power up a new node, the cluster detects it, recalibrates itself, and voilà.
That's where I plan to store my logs for long-term archiving. Logstash (is there anything it can't do?), when run in server mode, provides a web interface to search them. You would again use an AMQP input, this time with an Elasticsearch output to send the messages to Elasticsearch, and then run another instance of Logstash in web mode to provide the webUI.
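Sketched out, that last hop looks like this (Logstash 1.x syntax; hostnames are placeholders):

```
# logstash: consume from the 'elasticsearch' queue and index the messages
input {
  amqp {
    host => "rabbit.example.com"    # placeholder
    name => "elasticsearch"         # queue to consume from
    type => "syslog"
  }
}
output {
  elasticsearch {
    host => "es.example.com"        # placeholder
  }
}
```

The search UI is then a separate process, started with something like `java -jar logstash-monolithic.jar web` (the jar name varies by version).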
So that's it: a home-made Splunk-like system. Obviously, it's more work to deploy, but it's much cheaper, more flexible and open source, and it will grow as needed with your infrastructure. You can use it to easily aggregate logs from servers, applications and networking equipment, and to provide granular access to those logs through Graylog2.