Posts tagged ‘queue’

December 29th, 2014

Automatic Scaling with Chef and Kaltura API

by Jess Portnoy

Consider the following Kaltura cluster, built with Chef and Amazon's EC2: a Chef server, 1 load balancer, 2 front nodes, 2 batch nodes, 2 Sphinx nodes, and a single MySQL DB.
Usually, this cluster layout will handle the average load of a medium-sized user-generated video site (or video app) well. But what if significantly more videos are suddenly uploaded? How can you avoid downtime due to the increased traffic?

This post demonstrates how to automatically scale a Kaltura cluster based on system load monitoring, using Opscode Chef and the Kaltura API.

To simulate heavy load, we will use Kaltura's PHP5 client library to call the bulk upload API and add videos to the transcoding queue. We will also build a Kaltura watchdog script that runs as a cron job and alerts us when the conversion load hits a certain threshold.

When the watchdog alerts on a loaded transcoding queue, a Chef knife command (using the knife-ec2 plugin) will launch additional batch instances to handle the load.
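Such a scale-up command might look like the following sketch. The AMI ID, instance flavor, and key name are placeholders, and the knife-ec2 plugin must be installed on the Chef workstation; the run list matches the recipes applied later in the demo.

```shell
# Hypothetical knife-ec2 invocation; AMI, flavor, and key name are placeholders.
knife ec2 server create \
  --image ami-xxxxxxxx \
  --flavor m3.large \
  --ssh-key my-ec2-key \
  --run-list 'recipe[nfs],recipe[kaltura::batch]'
```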

Live Demo


What you need


Setting up

To connect the Chef server to the Kaltura cluster and run the Kaltura watchdog script, install the kaltura-base package. SSH to the Chef machine and, as the super user, run:
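On an RPM-based deployment this is a single package install; the command below assumes the Kaltura package repository is already configured on the machine.

```shell
# RPM-based systems (assumes the Kaltura yum repository is configured):
yum install kaltura-base
# Debian/Ubuntu equivalent:
# aptitude install kaltura-base
```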

Note: this configuration step alone will not start any unneeded Kaltura daemons or expose the Kaltura web interfaces from the Chef server. The kaltura-base package only allows our watchdog script to connect to the rest of the Kaltura cluster and monitor it via the Kaltura API.

Next, also on the Chef machine, edit: /opt/kaltura/app/tests/monitoring/config.ini
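The exact keys in this file depend on your Kaltura version, so check the shipped config.ini; a hypothetical sketch of the values you will need to fill in (the field names here are assumptions):

```ini
; Hypothetical field names -- verify against the config.ini shipped with your version.
SERVICE_URL = http://your-front-lb.example.com
ADMIN_SECRET = @ADMIN_SECRET@      ; admin secret for partner -1
MONITOR_PARTNER_SECRET = @SECRET@  ; secret for partner -4
```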

To retrieve the account API secret keys for partner ID -4, run the following from the command line as a privileged user:

To retrieve the account API admin secret key for partner ID -1, run the following from the command line as a privileged user:
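Both secrets live in the kaltura MySQL database; assuming the standard schema (the table and column names here are assumptions on my part), they can be pulled with a query along these lines:

```shell
# Assumed table/column names; run on the MySQL node, or add -h to point at it.
echo 'SELECT id, secret, admin_secret FROM partner WHERE id IN (-4, -1);' \
  | mysql -ukaltura -p kaltura
```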

To test the watchdog, run it from the command line as a privileged user:


The watchdog script

See the watchdog code on GitHub. (Feel free to fork and submit pull requests!)

Save the code to /usr/local/bin/ and make it executable:
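Assuming the script is saved as kaltura-watchdog.sh (the file name is hypothetical; use whatever name the GitHub project ships), that boils down to:

```shell
cp kaltura-watchdog.sh /usr/local/bin/
chmod +x /usr/local/bin/kaltura-watchdog.sh
```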

Test the watchdog using a bulk upload. From the Chef server, run the following:
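The exact entry point depends on your kaltura-base version; a hypothetical invocation of the bundled upload_bulk test (the script path and CSV file name are assumptions):

```shell
# Hypothetical path and sample file; upload_bulk ships with kaltura-base.
php /opt/kaltura/app/tests/upload_bulk.php /opt/kaltura/app/tests/sample_bulk.csv
```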

Run the upload_bulk script a few times to get a conversion queue going.

Normally, you will run the watchdog from crontab at roughly 5-minute intervals. To see it in action, let's run it manually:

Let's pass very small thresholds to the watchdog to see it working: 1 for warning and 10 for critical. (Naturally, in production, the numbers will be higher.) From the command line, run the following command:
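The argument names below are assumptions (check the script's usage text in the GitHub project); the invocation is along these lines:

```shell
# Hypothetical arguments: 1 = warning threshold, 10 = critical threshold.
/usr/local/bin/kaltura-watchdog.sh 1 10
```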

This will run the watchdog in an endless loop in the current shell so we can watch its output:

As you can see, we successfully launched a new EC2 instance, and applied the nfs and kaltura::batch Chef recipes using chef-client.


What’s next?

To extend this functionality into production, run a manager that will:

  • Keep monitoring the transcoding queue using the watchdog
  • Keep a list of new batch servers launched when the load gets high
  • When the load calms down, stop the batch daemon on the new transcoding node, wait 20 minutes to make sure the load remains low, and then terminate the instance
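A minimal sketch of that manager's decision logic (the function and argument names are mine, not from the watchdog project):

```shell
# decide_action QUEUE_DEPTH CRIT_THRESHOLD CALM_MINUTES
# Prints the manager's next step for a given transcoding-queue depth:
# scale up past the critical threshold, terminate extra nodes only after
# the load has stayed low for 20 minutes, otherwise keep waiting.
decide_action() {
  depth=$1; crit=$2; calm_min=$3
  if [ "$depth" -ge "$crit" ]; then
    echo scale_up          # e.g. knife ec2 server create ... kaltura::batch
  elif [ "$calm_min" -ge 20 ]; then
    echo terminate_extra   # stop the batch daemon, then terminate the instance
  else
    echo wait              # keep monitoring
  fi
}
```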

Note that the same practice can be applied to other cloud infrastructures or VM clusters (such as VMware) using their respective APIs.

If you build on it, please submit a pull request on the GitHub project.

October 26th, 2011

Content Moderation Workflows

by Roni Cohen

The Kaltura platform provides publishers with lots of great features and abilities. In this post and the next, we will dive into some of the moderation features.


Content Moderation

A KalturaBaseEntry object includes a moderation status, which indicates whether the entry still needs to be examined by an administrator or has already been approved or rejected. An entry waiting in the moderation queue is not yet available for public use, which prevents unauthorized content from being automatically published on your site.
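For example, entries awaiting review can be listed through the API. The sketch below assumes the api_v3 HTTP request format and a valid admin session (KS); the host, KS, and the numeric value for the pending-moderation status should be verified against your server's API reference.

```shell
# Hypothetical sketch: list entries pending moderation (host and KS are placeholders).
curl 'https://www.kaltura.com/api_v3/index.php' \
  --data 'service=baseentry' \
  --data 'action=list' \
  --data 'ks=YOUR_ADMIN_KS' \
  --data 'filter:objectType=KalturaBaseEntryFilter' \
  --data 'filter:moderationStatusEqual=1' \
  --data 'format=1'
```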


Moderation Screen KMC

