For testing your own OBS instance, or for small setups such as packaging some scripts from your administrators into RPMs and creating proper installation sources from them, the ready-to-use obs-server appliance images are the easiest way. You can download them from http://openbuildservice.org/download/.
To use the OBS for your Linux software development with many packages, projects and users, consider setting up your own installation. Depending on the number of users, projects, and architectures, you can split up the back-end (called partitioning) and have separate hosts for the front-end and the database.
However, for most installations it is still fine to run everything except the workers on one host with enough resources.
For flexibility, and if you want some kind of high availability, it is recommended to use virtualization for the different components.
Normally, for a small to medium installation, a setup with everything except the workers on one host is sufficient. You should have a separate /srv volume for the back-end data; XFS is the best choice of file system.
For each scheduler architecture you should add 4 GB RAM and one CPU core. For each build distribution you should add at least 50 GB of disk space per architecture.
A medium instance with about 50 users can easily run on a machine with 16 GB RAM, 4 cores, and 1 TB of storage. The storage requirements of course depend on the size of your projects and how often new versions are built.
For bigger installations you can use separate networks for back-end communication, workers and front-end.
The reference installation on build.opensuse.org, with a lot of users and distributions, runs on a partitioned setup with:
a MySQL cluster as database
api-server: 16 GB RAM, 4 cores, 50 GB disk
separate binary back-ends (scheduler, dispatcher, reposerver, publisher, warden)
source server: 11 GB RAM, 4 cores, 3 TB disk (RAM used mainly for caching)
main back-end: 62 GB RAM (oversized), 16 TB disk
a lot of workers (see https://build.opensuse.org/monitor)
For build times and overall performance, the number and speed of the available worker hosts matter more than the remaining parts.
A simple installation means that all OBS services run on the same machine.
It is very important that you read the README.SETUP file coming with your OBS version and follow the instructions there, because there may be changes specific to that version.
Before you start the installation of the OBS, you should make sure that your hosts have correct fully qualified hostnames and that DNS can resolve all names.
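To verify this quickly on each host, you can check the fully qualified name and its resolution; this is just a simple sanity check, not a complete DNS test:
hostname -f
host $(hostname -f)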
The back-end hosts all sources and built packages. It also schedules the build jobs. You need to install the "obs-server" package for this. You should check the /usr/lib/obs/server/BSConfig.pm file, but the defaults should be good enough for the simple case.
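For example, on openSUSE the package can be installed with zypper, as used elsewhere in this guide:
zypper in obs-server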
You can control the different back-end components via systemctl. Basically, you can enable/disable a service at boot time and start/stop/restart it on a running system. For more information, have a look at the systemctl man page (https://www.freedesktop.org/software/systemd/man/systemctl.html#Commands). For example, to restart the repository server use
systemctl restart obsrepserver.service
Component | Service Name | Remarks
---|---|---
Source Server | obssrcserver.service |
Repository Server | obsrepserver.service |
Source Services | obsservice.service |
Download on Demand Updater | obsdodup.service | since 2.7
Delta Storage | obsdeltastore.service | since 2.7
Scheduler | obsscheduler.service |
Dispatcher | obsdispatcher.service |
Publisher | obspublisher.service |
Signer | obssigner.service |
Warden | obswarden.service |
Cloud Upload Worker | obsclouduploadworker.service | only needed for the cloud upload feature
Cloud Upload Server | obsclouduploadserver.service | only needed for the cloud upload feature
The sequence in the table reflects the start sequence. You need to enable the services with
systemctl enable <name>
first, and then you can start them:
systemctl start obssrcserver.service
systemctl start obsrepserver.service
systemctl start obsservice.service
systemctl start obsdodup.service
systemctl start obsdeltastore.service
systemctl start obsscheduler.service
systemctl start obsdispatcher.service
systemctl start obspublisher.service
systemctl start obssigner.service
systemctl start obswarden.service
systemctl start obsclouduploadworker.service
systemctl start obsclouduploadserver.service
These commands start services which are accessible from the outside. Do not run them on a system connected to an untrusted network, or make sure to block the ports with a firewall.
In order to set up the cloud upload feature, you will need to configure the tools required for each cloud provider. Right now, only Amazon Web Services (https://aws.amazon.com) and Microsoft Azure (https://portal.azure.com) are supported as providers.
Before you can start uploading images to Amazon Web Services (AWS), you have to:
Install the obs-cloud-uploader package
zypper in obs-cloud-uploader
Start the cloud upload services
rcobsclouduploadworker start
rcobsclouduploadserver start
Finally, you have to register the cloud uploader service in /usr/lib/obs/server/BSConfig.pm, e.g. by adding the following line:
our $clouduploadserver = "http://$hostname:5452";
Read more about configuring the backend in Section 1.4, “Distributed Setup”.
Ensure that the system time of your cloud uploader instance is correct. AWS relies on the timestamps of the requests it receives, so an incorrect system time will cause cloud uploads to fail.
We are going to use the role-based authentication provided by Amazon to enable the OBS instance to upload images to other users' accounts.
The users will obtain an external ID (automatically created and unique) and the OBS account ID to create an Identity and Access Management (IAM) role. After creating the role, the user needs to provide the Amazon Resource Name (ARN) of the role to OBS. OBS will use this ARN to obtain temporary credentials for the user's account to upload the appliance; therefore an uploader account is necessary, which we need to configure (see the AWS authentication credentials setup below). The ARN and the external ID are not considered secrets.
The whole workflow is described in the AWS documentation (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html).
For uploading images to AWS, OBS uses the AWS CLI (https://aws.amazon.com/cli) tool. Before you can start uploading your images, you have to enter the AWS credentials of the uploader account into the /etc/obs/cloudupload/.aws/credentials configuration file. These credentials will then be used by OBS to retrieve the temporary credentials from the ARN provided by users. More information about IAM role-based authorization can be found in the Amazon documentation (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html).
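The file uses the standard AWS CLI credentials format. A minimal sketch with AWS's documented placeholder values; substitute the real keys of your uploader account:
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY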
The authentication is done via Microsoft's Active Directory. The user has to create a new application and needs to provide these two credentials to OBS:
Application ID
The Application ID is a unique ID that represents an Active Directory Application.
Application Key
The Application Key can be generated for every application and serves as the password.
OBS communicates with the REST API of Microsoft Azure to authenticate and upload images.
The Application ID and the Application Key will be stored encrypted in the database. Therefore, it is required to generate an SSL secret and public key pair that has to be stored on the server where the obs-cloud-uploader package has been installed.
To generate that key pair, execute the following commands:
cd /etc/obs/cloudupload
openssl genrsa -out secret.pem
openssl rsa -in secret.pem -out _pubkey -outform PEM -pubout
It is important that the public key is named _pubkey, that the secret key is named secret.pem, and that both are kept in /etc/obs/cloudupload.
You need to install the "obs-api" package and a MySQL server for this.
Make sure that the MySQL server is started on every system reboot (on older SysV init systems via "insserv mysql", on systemd systems via systemctl as shown below). You should run mysql_secure_installation and follow the instructions.
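A typical sequence on a systemd-based host; note that the service may be named mariadb instead of mysql on some distributions:
systemctl enable mysql
systemctl start mysql
mysql_secure_installation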
Create the empty production databases:
# mysql -u root -p
mysql> create database api_production;
mysql> quit
Use a separate MySQL user (for example, obs) for the OBS access:
# mysql -u root -p
mysql> create user 'obs'@'%' identified by 'TopSecretPassword';
mysql> create user 'obs'@'localhost' identified by 'TopSecretPassword';
mysql> GRANT all privileges ON api_production.* TO 'obs'@'%', 'obs'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> quit
Configure your MySQL user and password in the "production" section of the api config: /srv/www/obs/api/config/database.yml
Example:
# MySQL (default setup). Versions 4.1 and 5.0 are recommended.
#
# Get the fast C bindings:
#   gem install mysql
#   (on OS X: gem install mysql -- --include=/usr/local/lib)
# And be sure to use new-style password hashing:
#   http://dev.mysql.com/doc/refman/5.0/en/old-client.html
production:
  adapter: mysql2
  database: api_production
  username: obs
  password: TopSecretPassword
  encoding: utf8
  timeout: 15
  pool: 30
Now populate the database:
cd /srv/www/obs/api/
sudo RAILS_ENV="production" rake db:setup
sudo RAILS_ENV="production" rake writeconfiguration
sudo chown -R wwwrun.www log tmp
Now you are done with the database setup.
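To verify the setup, you can list the freshly created tables with the obs user from above; this is just a sanity check:
mysql -u obs -p api_production -e "show tables;"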
Now we need to configure the Web server. By default, you can reach the web user interface and the API on port 443 speaking HTTPS. Repositories can be accessed via HTTP on port 82 (once some packages are built). An overview page about your OBS instance can be found at http://localhost.
The obs-api package comes with an Apache vhost file, /etc/apache2/vhosts.d/obs.conf, which does not need to be modified if you stay with these defaults.
Install the required packages via
zypper in obs-api apache2 apache2-mod_xforward rubygem-passenger-apache2 memcached
Add the following Apache modules in /etc/sysconfig/apache2:
APACHE_MODULES="... passenger rewrite proxy proxy_http xforward headers socache_shmcb"
Enable SSL in /etc/sysconfig/apache2 via
APACHE_SERVER_FLAGS="SSL"
For production systems you should order official SSL certificates. For testing, follow these instructions to create a self-signed SSL certificate:
mkdir /srv/obs/certs
openssl genrsa -out /srv/obs/certs/server.key 1024
openssl req -new -key /srv/obs/certs/server.key \
    -out /srv/obs/certs/server.csr
openssl x509 -req -days 365 -in /srv/obs/certs/server.csr \
    -signkey /srv/obs/certs/server.key -out /srv/obs/certs/server.crt
cat /srv/obs/certs/server.key /srv/obs/certs/server.crt \
    > /srv/obs/certs/server.pem
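You can inspect the resulting certificate, for example to confirm its subject and validity dates:
openssl x509 -in /srv/obs/certs/server.crt -noout -subject -dates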
To allow the usage of the HTTPS API in the Web UI code, you need to trust your certificate as well:
cp /srv/obs/certs/server.pem /etc/ssl/certs/
c_rehash /etc/ssl/certs/
Check and edit /srv/www/obs/api/config/options.yml.
If you change the hostnames/IPs of the API, you need to adjust frontend_host accordingly. If you want to use LDAP, you need to change the LDAP settings as well; see Section 3.7, “Managing Users and Groups” for details. You will find examples and more details in Section 2.1, “Configuration Files”.
It is recommended to also enable use_xforward: true here.
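A minimal sketch of the relevant options.yml entries, reusing the api.testobs.org hostname from the partitioning example later in this chapter; check the options.yml shipped with your version for the exact option names:
frontend_host: api.testobs.org
frontend_port: 443
frontend_protocol: https
use_xforward: true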
Afterwards you can start the OBS web API and make it permanent via
systemctl enable apache2
systemctl start apache2
systemctl enable obsapidelayed.service
systemctl start obsapidelayed.service
systemctl enable memcached.service
systemctl start memcached.service
Now you have your own empty instance running, and you can do some online configuration steps.
To customize the OBS instance you may need to configure some settings via the OBS API and Web user interface.
First you should change the password of the Admin account. For this, log in as user Admin in the Web UI with the default password "opensuse". Click on the Admin link (top right of the page), where you can change the password.
After changing the Admin password, set up osc to use the Admin account for further changes. Here is an example:
osc -c ~/.obsadmin_osc.rc -A https://api.testobs.org
Follow the instructions on the terminal.
The password is stored in clear text in this file by default, so you need to give this file restrictive access rights; only read/write access for your user should be allowed. osc also allows storing the password in other ways (in keyrings, for example); refer to the osc documentation for this.
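For example, to restrict the file to your own user:
chmod 600 ~/.obsadmin_osc.rc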
Now you can check out the main configuration of the OBS:
osc -c ~/.obsadmin_osc.rc api /configuration > /tmp/obs.config
cat /tmp/obs.config
<configuration>
  <title>Open Build Service</title>
  <description>
    <p class="description">
      The <a href="http://openbuildservice.org">Open Build Service (OBS)</a>
      is an open and complete distribution development platform that provides
      a transparent infrastructure for development of Linux distributions,
      used by openSUSE, MeeGo and other distributions. Supporting also Fedora,
      Debian, Ubuntu, RedHat and other Linux distributions.
    </p>
    <p class="description">
      The OBS is developed under the umbrella of the
      <a href="http://www.opensuse.org">openSUSE project</a>. Please find
      further informations on the
      <a href="http://wiki.opensuse.org/openSUSE:Build_Service">openSUSE Project wiki pages</a>.
    </p>
    <p class="description">
      The Open Build Service developer team is greeting you. In case you use
      your OBS productive in your facility, please do us a favor and add yourself at
      <a href="http://wiki.opensuse.org/openSUSE:Build_Service_installations">this wiki page</a>.
      Have fun and fast build times!
    </p>
  </description>
  <name>private</name>
  <download_on_demand>on</download_on_demand>
  <enforce_project_keys>off</enforce_project_keys>
  <anonymous>on</anonymous>
  <registration>allow</registration>
  <default_access_disabled>off</default_access_disabled>
  <allow_user_to_create_home_project>on</allow_user_to_create_home_project>
  <disallow_group_creation>off</disallow_group_creation>
  <change_password>on</change_password>
  <hide_private_options>off</hide_private_options>
  <gravatar>on</gravatar>
  <cleanup_empty_projects>on</cleanup_empty_projects>
  <disable_publish_for_branches>on</disable_publish_for_branches>
  <admin_email>unconfigured@openbuildservice.org</admin_email>
  <unlisted_projects_filter>^home:.+</unlisted_projects_filter>
  <unlisted_projects_filter_description>home projects</unlisted_projects_filter_description>
  <schedulers>
    <arch>armv7l</arch>
    <arch>i586</arch>
    <arch>x86_64</arch>
  </schedulers>
</configuration>
unlisted_projects_filter only accepts a regular expression (see the RLIKE specification of MySQL/MariaDB for more information), and unlisted_projects_filter_description is part of the link shown in the project list for filtering.
You should edit this file according to your preferences and then send it back to the server:
osc -c ~/.obsadmin_osc.rc api /configuration -T /tmp/obs.config
If you want to use an interconnect to another OBS instance to reuse its build targets, you can do this as Admin via the Web UI, or create a project with a remoteurl tag (see Section 2.4.2, “Project Metadata”):
<project name="openSUSE.org">
  <title>openSUSE.org Project</title>
  <description>
    This project refers to projects hosted on the Build Service [...]
    Use openSUSE.org:openSUSE:12.3 for example to build against the
    openSUSE:12.3 project as specified on the opensuse.org Build Service.
  </description>
  <remoteurl>https://api.opensuse.org/public</remoteurl>
</project>
You can create the project using a file with the above content with osc like this:
osc -c ~/.obsadmin_osc.rc meta prj openSUSE.org -F /tmp/openSUSE.org.meta
You can also import binary distributions; see Section 3.2.2, “Importing Distributions” for this.
The OBS has a list of available distributions used for building. This list is displayed to users when they add repositories to their projects. It can be managed via the API path /distributions:
osc -c ~/.obsadmin_osc.rc api /distributions > /tmp/distributions.xml
Example distributions.xml file:
<distributions>
  <distribution vendor="SUSE" version="SLE-12-SP1" id="137">
    <name>SLE-12-SP1</name>
    <project>SUSE:SLE-12-SP1</project>
    <reponame>SLE-12-SP1</reponame>
    <repository>standard</repository>
    <link>http://www.suse.com/</link>
    <icon url="https://static.opensuse.org/distributions/logos/suse-SLE-12-8.png" width="8" height="8"/>
    <icon url="https://static.opensuse.org/distributions/logos/suse-SLE-12-16.png" width="16" height="16"/>
    <architecture>x86_64</architecture>
  </distribution>
</distributions>
You can add your own distributions here and update the list on the server:
osc -c ~/.obsadmin_osc.rc api /distributions -T /tmp/distributions.xml
To not burden your OBS back-end daemons with the unpredictable load that package builds can produce (think of someone building a monstrous package like LibreOffice), you should not run OBS workers on the same host as the rest of the back-end daemons.
Your back-end needs to be configured to use the correct hostnames for the repo and source servers, and the ports need to be reachable by the workers. Also, the IP addresses of the workers need to be allowed to connect to the services (see the $ipaccess array in /usr/lib/obs/server/BSConfig.pm).
You can deploy workers quite simply using the worker appliance, or install a minimal system plus the obs-worker package on the hardware.
Edit the /etc/sysconfig/obs-server file; at least OBS_SRC_SERVER, OBS_REPO_SERVERS and OBS_WORKER_INSTANCES need to be set. More details can be found in Section 2.1, “Configuration Files”.
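An illustrative snippet, reusing the host names of the testobs.org example from the distributed setup below; adjust hosts, ports and the number of instances to your hardware:
OBS_SRC_SERVER="srcsrv.testobs.org:5352"
OBS_REPO_SERVERS="mainbackend.testobs.org:5252"
OBS_WORKER_INSTANCES="2"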
Start the worker:
systemctl enable obsworker
systemctl start obsworker
All OBS back-end daemons can also be started on individual machines in your network. Also, the front-end Web server and the MySQL server can run on different machines. Especially for large scale OBS installations this is the recommended setup.
A setup with partitioning is very similar to the simple setup; here we only mention the differences.
You need to make sure that the different machines can communicate via the network. It is strongly recommended to use a separate network for this, to isolate it from the public part.
On all back-end hosts you need to install the obs-server package. On the front-end host you need to install the obs-api package.
Only one source server instance can exist in a single OBS installation.
The binary back-end can be split on project level, this is called partitioning.
On each partition, the following services need to be configured and running:
repserver
schedulers
dispatcher
warden
publisher
You do not need to share any directories on File System level between the partitions.
Here is an example of partitioning:
A main partition for everything not in the others (host mainbackend)
A home partition for all home projects of the users (host homebackend)
A release partition for released software projects (host releasebackend)
The configuration is done in the back-end config file /usr/lib/obs/server/BSConfig.pm. Most parts of the file can be shared between the back-ends.
Here are the important parts of the mainbackend configuration of our testobs.org installation:
[...]
my $hostname = Net::Domain::hostfqdn() || 'localhost';
# IP corresponding to hostname (only used for $ipaccess); fallback to
# localhost since inet_aton may fail to resolve at shutdown.
my $ip = quotemeta inet_ntoa(inet_aton($hostname) || inet_aton("localhost"));

my $frontend = 'api.testobs.org'; # FQDN of the Web UI/API server if it's not $hostname

# If defined, restrict access to the backend servers (bs_repserver, bs_srcserver, bs_service)
our $ipaccess = {
   '127\..*'     => 'rw',     # only the localhost can write to the backend
   "^$ip"        => 'rw',     # Permit IP of FQDN
   "10.20.1.100" => 'rw',     # Permit IP of srcsrv.testobs.org
   "10.20.1.101" => 'rw',     # Permit IP of mainbackend.testobs.org
   "10.20.1.102" => 'rw',     # Permit IP of homebackend.testobs.org
   "10.20.1.103" => 'rw',     # Permit IP of releasebackend.testobs.org
   '10.20.2.*'   => 'worker', # build results can be delivered from any client in the network
};

# IP of the Web UI/API Server (only used for $ipaccess)
if ($frontend) {
  my $frontendip = quotemeta inet_ntoa(inet_aton($frontend) || inet_aton("localhost"));
  $ipaccess->{$frontendip} = 'rw'; # in dotted.quad format
}

# also change the SLP reg files in /etc/slp.reg.d/ when you touch hostname or port
our $srcserver = "http://srcsrv.testobs.org:5352";
our $reposerver = "http://mainbackend.testobs.org:5252";
our $serviceserver = "http://service.testobs.org:5152";

# Needed if you want to use the cloud upload feature
our $clouduploadserver = "http://$hostname:5452";

# our @reposervers = ("http://mainbackend.testobs.org:5252", "http://homebackend.testobs.org:5252", "http://releasebackend.testobs.org:5252");

# you can use different ports for worker connections
our $workersrcserver = "http://w-srcsrv.testobs.org:5353";
our $workerreposerver = "http://w-mainbackend.testobs.org:5253";
[...]
our $partition = 'main';

# this defines how the projects are split. All home: projects are hosted
# on an own server in this example. Order is important.
our $partitioning = [
  'home:'   => 'home',
  'release' => 'release',
  '.*'      => 'main',
];

our $partitionservers = {
  'home'    => 'http://homebackend.testobs.org:5252',
  'release' => 'http://releasebackend.testobs.org:5252',
  'main'    => 'http://mainbackend.testobs.org:5252',
};
[...]
On the other partition servers you need to change "our $reposerver", "our $workerreposerver" and "our $partition" accordingly.
On all partition servers you need to start:
systemctl start obsrepserver.service
systemctl start obsscheduler.service
systemctl start obsdispatcher.service
systemctl start obspublisher.service
systemctl start obswarden.service
On the worker machines you should set the list of repo servers in the OBS_REPO_SERVERS variable. You can also define workers with only a subset of the repo servers to prioritize partitions.
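For example, in /etc/sysconfig/obs-server on a worker that should only build for the main and home partitions of the example above (host names are illustrative):
OBS_REPO_SERVERS="mainbackend.testobs.org:5252 homebackend.testobs.org:5252"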
In this chapter you will find some general monitoring instructions for the Open Build Service. All examples are based on Nagios plugins, but the information provided should be easily adaptable for other monitoring solutions.
This check will output a critical if the HTTP server with IP address 172.19.19.19 (-I 172.19.19.19) listening on port 80 (-p 80) does not answer, and output a warning if the HTTP return code is not 200. The server name used is server (-H server), which is important if different virtual hosts are listening on the same port.
check_http -H server -I 172.19.19.19 -p 80 -u http://server
The same check, but this time for an SSL-enabled HTTP server.
check_http -S -H server -I 172.19.19.19 -p 443 -u https://server
It is also possible to check for the presence of a certain string in the HTTP response. In this case it will check for the string Source Service Server.
check_http -s "Source Service Server" -S -H server -I 172.19.19.19 -p 5152
Open Build Service HTTP endpoints that should be checked:
Web Interface / API: port 443
Repository Server: port 82
Package Repository Server: port 5252
Source Repository Server: port 5352
Source Service Server: port 5152
Cloud Upload Server: port 5452
This is a list of common checks that should be run on each individual server.
This check will output a warning if less than 10 percent of disk space is available (-w 10) and output a critical if less than 5 percent is available (-c 5). It will check all file systems except those of type none (-x none).
check_disk -w 10 -c 5 -x none
This check will output a warning if less than 10 percent memory is available (-w 10) and output a critical if less than 5 percent memory is available (-c 5). OS caches will be counted as free memory (-C) and it will check the available memory (-f). check_mem.pl is not a standard Nagios plugin and can be downloaded at https://exchange.nagios.org/ (https://exchange.nagios.org/).
check_mem.pl -f -C -w 10 -c 5
This check will compare the local time with the time provided by the NTP server pool.ntp.org (-H pool.ntp.org). It will output a warning if the time differs by 0.5 seconds (-w 0.5) and output a critical if the time differs by 1 second (-c 1).
check_ntp_time -H pool.ntp.org -w 0.5 -c 1
This plugin checks whether the server responds to a ping request. It will output a warning if the response time exceeds 200 ms or packet loss exceeds 30 percent (-w 200.0,30%), and output a critical if the response time exceeds 500 ms or packet loss exceeds 60 percent (-c 500.0,60%).
check_icmp -H server -w 200.0,30% -c 500.0,60%
This check will output a warning if the load value exceeded 7.0 in the last minute, 6.0 in the last 5 minutes or 5.0 in the last 15 minutes (-w 7.0,6.0,5.0). It will output a critical if the load value exceeded 12.0 in the last minute, 8.0 in the last 5 minutes or 6.0 in the last 15 minutes (-c 12.0,8.0,6.0).
check_load -w 7.0,6.0,5.0 -c 12.0,8.0,6.0
This check is only relevant on physical systems with local storage attached. It checks the disk status via the S.M.A.R.T. interface and will output a critical if any of the S.M.A.R.T. values exceeds its critical limits. check_smartmon is not a standard Nagios plugin and can be downloaded at https://exchange.nagios.org/.
check_smartmon --drive /dev/sda --drive /dev/sdb
This check verifies that the MySQL database server is running and that the database api_production is available.
check_mysql -H localhost -u nagios -p xxxxxx -d api_production
MySQL Databases to check:
api_production
mysql
It is always advisable to check that the last backup run was successful and that a recent backup is available. The check itself depends on the backup solution in use.
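As one possible approach, if your backup job writes a dump file to a known path, the standard check_file_age plugin can alert on stale backups. The path and thresholds below are assumptions to adapt to your setup (warning after roughly one day, critical after roughly two):
check_file_age -w 90000 -c 180000 -f /backup/obs/api_production.dump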