Pipelines for Applications

Running the Pipelines installation script

These instructions are specific to Linux.

Before getting started, ensure you have the latest Pipelines installation script. Please contact our team to obtain the Pipelines on premises install script.

Also ensure you have reviewed the AWS requirements for your installation.

Pipelines Installation Script

The following fields should be configured before running the script:

ENTERPRISE_VERSION
  This value will be supplied by Pipelines.

AWS_ACCESS_KEY
  If the install is being run on an EC2 instance in AWS and the instance has already been assigned an appropriate IAM role, enter the value INSTANCE.
  Otherwise, enter the AWS Access Key that has the appropriate role/permissions for Pipelines. This includes access to S3 and/or DynamoDB.
  If installing on bare metal, leave this blank.

AWS_SECRET_KEY
  If the install is being run on an EC2 instance in AWS and the instance has already been assigned an appropriate IAM role, enter the value INSTANCE.
  Otherwise, enter the AWS Secret Key that has the appropriate role/permissions for Pipelines. This includes access to S3 and/or DynamoDB.
  If installing on bare metal, leave this blank.

MYSQL_CREDS
  If using MySQL, enter the username=password that Pipelines will use to access the MySQL database.
  Example: root=secretpassword

HTTPS_PROXY
  Deprecated. Use S3_PROXY_ENDPOINT and DDB_PROXY_ENDPOINT instead.
  If there is an HTTPS proxy in your environment that Pipelines must use, enter the full URL for the proxy server.
  Example: https://192.168.0.112:8888/
  Otherwise, leave the field blank.

S3_PROXY_ENDPOINT
  If there is an S3 proxy in your environment that Pipelines must use, enter the full URL for the proxy server.
  Otherwise, leave the field blank.

S3_ENDPOINT
  If using an S3 emulator or Artifactory, this specifies the full URL for the endpoint of the service.
  Example: http://artifactory.example.com
  Otherwise, leave the field blank.

S3_PROVIDER
  If using S3 or an S3 emulator, set this to S3. If using Artifactory, set this to ARTIFACTORY.

DDB_PROXY_ENDPOINT
  If there is a DynamoDB proxy in your environment that Pipelines must use, enter the full URL for the proxy server.
  Otherwise, leave the field blank.

REGION
  Enter the AWS Region this Pipelines instance will operate in. This is the same region that the DynamoDB tables and S3 bucket are in.
  Example: us-west-2

STAGE
  There are only 3 valid values: beta, gamma, or prod. This choice affects the naming of DynamoDB tables and the DNS names.

DISTELLI_EMAIL
  The initial Pipelines SuperUser email login. Use this email for the first login to Pipelines.

DISTELLI_PASSWORD
  The initial Pipelines SuperUser login password. Use this for the first login to Pipelines.

S3_BUCKET
  Enter the AWS S3 bucket name that Pipelines will use for user release artifact data.
  If using Artifactory, this is the Artifactory generic binary repository.

S3_SUBDIR
  Enter the AWS S3 bucket subdirectory that Pipelines will use for user release artifact data.
  Only used for S3.

S3_SSE
  Whether to use S3 server-side encryption. Valid options: true or false.

WEBUI_ENDPOINT
  The URL or IP address and port for the Pipelines web UI. Typically this points to a load balancer.

BACKEND_ENDPOINT
  The URL or IP address and port for the Pipelines backend service.

AGENT_ENDPOINT
  The URL or IP address and port for the Pipelines agent service.

DDB_CIPHER_KEY
  The database cipher key. This is required and must be the same for all Pipelines instances using the same DynamoDB. It can be created with the following syntax:
  dd bs=1 if=/dev/urandom count=16 2>/dev/null | base64

DDB_TABLE_PREFIX
  The database table name prefix.

SUDO
  The tool used to provide advanced access to system resources. Typically this is sudo.

DISTELLI_TOOLS
  The default file location for Pipelines tools. Typically this is /usr/local.

DISTELLI_CONFIG
  The default file location for Pipelines configuration files. Typically this is /etc.

DISTELLI_USER
  The user that Pipelines uses for local deployments of Pipelines releases. Typically this is distelli.

ROOT_USER
  The system root user. Typically this is root.

DATA_DIR
  The directory the Pipelines agent will deploy to. Typically this is /distelli.

CUSTOM_MANAGER
  Use this option to set any distelli agent install options.

MYSQL_ENDPOINT
  If using MySQL, set this to the database endpoint, port, and database name.
  Example: mysql://localhost:3306/distelliDB
  If using SSL with MySQL, specify the certificate as well.
  Example: mysql://distelli-alpha.cabc012efgh3.us-east-8.rds.amazonaws.com:3306/onprem?useSSL=true&serverSslCert=$DISTELLI_CONFIG/rds-combined-ca-bundle.pem
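Put together, a minimal configuration for an AWS-backed install might look like the following shell fragment. All values here are placeholders to substitute with your own; only the cipher-key generation command comes from the table above.

```shell
# Hypothetical values for an AWS-backed install; substitute your own.
ENTERPRISE_VERSION="3.0.0"          # supplied by Pipelines
AWS_ACCESS_KEY="INSTANCE"           # the EC2 instance has an appropriate IAM role
AWS_SECRET_KEY="INSTANCE"
REGION="us-west-2"
STAGE="prod"
S3_BUCKET="my-pipelines-artifacts"  # placeholder bucket name
S3_SSE="true"
DDB_TABLE_PREFIX="pipelines-"
# Generate a cipher key; reuse the SAME key on every Pipelines instance.
DDB_CIPHER_KEY="$(dd bs=1 if=/dev/urandom count=16 2>/dev/null | base64)"
echo "Cipher key: $DDB_CIPHER_KEY"
```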

Installing on MySQL

Pipelines on-premises supports installing on MySQL.

Note: Pipelines must use MySQL 5.7 or later in the 5.x release series.

Pipelines with MySQL Prerequisites

Before beginning the Pipelines install script with MySQL, the database must first be created.

Install Pipelines with MySQL

There are two variables in the Pipelines install shell script that pertain specifically to installing Pipelines with MySQL.

MYSQL_CREDS
  If using MySQL, enter the username=password that Pipelines will use to access the MySQL database.
  Example: root=secretpassword

MYSQL_ENDPOINT
  If using MySQL, set this to the database endpoint, port, and database name.
  Example: mysql://localhost:3306/distelliDB
  If using SSL with MySQL, specify the certificate as well.
  Example: mysql://distelli-alpha.cabc012efgh3.us-east-8.rds.amazonaws.com:3306/onprem?useSSL=true&serverSslCert=$DISTELLI_CONFIG/rds-combined-ca-bundle.pem
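The endpoint string can be composed from its parts, as the sketch below shows. The host, port, and database name are placeholders.

```shell
# Compose MYSQL_ENDPOINT from placeholder parts.
DB_HOST="localhost"
DB_PORT="3306"
DB_NAME="distelliDB"
MYSQL_ENDPOINT="mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$MYSQL_ENDPOINT"

# With SSL, append the useSSL and serverSslCert query parameters.
DISTELLI_CONFIG="/etc"
MYSQL_ENDPOINT_SSL="${MYSQL_ENDPOINT}?useSSL=true&serverSslCert=${DISTELLI_CONFIG}/rds-combined-ca-bundle.pem"
echo "$MYSQL_ENDPOINT_SSL"
```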

Upgrading Pipelines on MySQL

Upgrading Pipelines on MySQL will automatically handle any database migrations, changes, and new index creations.

Obtain the Pipelines on premises Install Script

Please contact our team to obtain the Pipelines on premises install script.

Install MySQL 5.7 on Ubuntu

  1. Get the apt-get configurator.
     wget http://dev.mysql.com/get/mysql-apt-config_0.6.0-1_all.deb

  2. Install the apt-get configurator.
     sudo dpkg -i mysql-apt-config_0.6.0-1_all.deb

  3. Update apt-get.
     sudo apt-get update

  4. Install MySQL 5.7.
     sudo apt-get install mysql-server

Create the MySQL Database

  1. Log in to MySQL.
     mysql -u USER -p

     You will be prompted for the USER password.

  2. Create the database.
     create database DISTELLI_DB;

  3. Add a user and grant permissions to the database.
     grant all privileges on DISTELLI_DB.* to USER identified by "PASSWORD";

  4. Exit MySQL.
     exit;
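The interactive steps above can also be scripted. This sketch only generates and prints the SQL (the database name, user, and password are placeholders), leaving the actual mysql invocation commented out since it requires a running server.

```shell
# Placeholder names; substitute your own.
DB_NAME="DISTELLI_DB"
DB_USER="pipelines"
DB_PASS="secretpassword"

# Generate the same statements as the interactive session above.
SQL="create database ${DB_NAME};
grant all privileges on ${DB_NAME}.* to ${DB_USER} identified by \"${DB_PASS}\";"

echo "$SQL"
# echo "$SQL" | mysql -u root -p    # requires a running MySQL server
```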

Set up on-premises Docker build images

Shared build servers can be offered as a resource to the users of a Pipelines on-premises install.

When building on a shared build server, in Pipelines, users select a Docker image. Pipelines offers several flavors of images, including:

  • Base - The base image
  • Android - For building Android apps
  • Go - For the Go Language
  • JVM - For Java
  • JavaScript - For JavaScript applications
  • Perl - For Perl
  • PHP - For PHP
  • Python - For Python
  • Ruby - For Ruby

For more technical details on the images, please see Pipelines Build Environment Details.

A hosted on-premises version of Pipelines can offer none, some, or all of the images. Further, after this is configured, users can add their own personal Docker image to build from. For more info see Creating Docker Build Images for Pipelines.

An outline of the steps involved to set this up:

  • Determine a docker registry where the docker build images will be stored.
  • Ensure the on-premises environment is up and working with its specific Pipelines agent.
  • Set up a build server in the D1 (default Pipelines) account.
  • Rebuild the existing Pipelines docker build images into the above registry with the on-premises Pipelines agent.
  • Update the Pipelines shared image docker config file.
  • Restart the Pipelines web UI.

Select registry

The Pipelines shared docker build images must sit in a valid docker registry.

Docker Hub is a free service.

Install on-premises

Work with the Pipelines customer success team to:

  • Install your Pipelines stack.
  • Ensure the Pipelines agent is working.

Set up shared build server

Set up and configure a shared build server in the D1 (default Pipelines) account.

For more information on provisioning a build server in Pipelines see: Using your own Build Server.

This server must also have Docker installed. The Pipelines user must be added to the "docker" group.

Note: After adding the Pipelines user to the docker group, you must restart the distelli supervise process.

sudo distelli supervise stop
sudo distelli supervise start

You do not configure build tools on this server; the Docker containers include the build tools.
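A quick way to confirm group membership is to inspect the output of `id -nG`. The sketch below checks the current user so it runs anywhere; on the build server you would substitute the Pipelines user (typically distelli).

```shell
# Check whether a user belongs to the "docker" group.
user="$(id -un)"
if id -nG "$user" | tr ' ' '\n' | grep -qx docker; then
  echo "$user is in the docker group"
else
  echo "$user is NOT in the docker group (add with: sudo usermod -aG docker $user)"
fi
```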

Build Docker images

You must now:

  • Copy each Pipelines docker build image. The images are on Docker Hub here:
    • distelli/travis-ruby
    • distelli/travis-base
    • distelli/travis-javascript
    • distelli/travis-jvm
    • distelli/travis-android
    • distelli/travis-erlang
    • distelli/travis-go
    • distelli/travis-haskell
    • distelli/travis-perl
    • distelli/travis-php
    • distelli/travis-python
  • Create a new image (Dockerfile) based on this image.
    • Include the Pipelines agent for this on-premises install:
      FROM distelli/travis-base
      RUN wget -qO- https://pipelines.puppet.com/download/client | sh
  • Build the image.
  • Push the image to the on-premises registry.

This process can easily be done from the build server created in the previous step, and should be done as the Pipelines user.

You will need to log the Pipelines user in to the on-premises destination registry before beginning.

The following code automates the steps outlined above.

DESTINATION_REGISTRY=123456789.dkr.ecr.us-east-1.amazonaws.com
DISTELLI_AGENT_URL=https://www-distelli.example.com/download/client
# Log in to the destination registry
eval "$(aws --region us-east-1 ecr get-login)"
for L in base android go jvm javascript perl php python ruby; do
  docker pull distelli/travis-$L || break
  printf "FROM distelli/travis-$L\nRUN wget -qO- $DISTELLI_AGENT_URL | sh\n" > Dockerfile
  docker build -t $DESTINATION_REGISTRY/distelli-build-$L . || break
  docker push $DESTINATION_REGISTRY/distelli-build-$L || break
done
  • DESTINATION_REGISTRY - The Docker URL of the registry. If using Docker Hub, it is simply your Docker Hub user name.
  • DISTELLI_AGENT_URL - This URL can be found here:
    1. Servers
    2. Add Server
    3. Add Existing Server

Here you can find the URL to download the Pipelines agent from this on-premises install of Pipelines.

agent url

You must ensure the "Pipelines" user on the build server has access to pull the images from the source registry and to push images to the destination registry.

Update config file

Update the Pipelines shared image Docker config file. This can be found on the Pipelines server at:

/etc/distelli-config.json

Look for the entry DockerImages in the JSON blob.

"DockerImages" : { }

New entries for shared Docker build images use the format:

"registry/image":"Description"

For Pipelines, this looks like this:

"stage/*/DockerImages": {
    "distelli/travis-base": "Distelli Base", 
    "distelli/travis-jvm": "Distelli Java", 
    "distelli/Javascript": "Distelli Javascript", 
}

Adjust your file settings and save it.
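A malformed edit (for example, a trailing comma) will break the configuration, so it is worth validating the JSON before restarting anything. This sketch assumes python3 is available, and writes a standalone example entry to /tmp so it runs anywhere; on the Pipelines server you would point the same check at /etc/distelli-config.json itself.

```shell
# Write an example DockerImages entry (image names and descriptions are illustrative).
cat > /tmp/docker-images-snippet.json <<'EOF'
{
  "stage/*/DockerImages": {
    "distelli/travis-base": "Distelli Base",
    "distelli/travis-jvm": "Distelli Java",
    "distelli/travis-javascript": "Distelli Javascript"
  }
}
EOF
# Fail loudly on syntax errors such as trailing commas.
python3 -m json.tool /tmp/docker-images-snippet.json > /dev/null && echo "valid JSON"
```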

Restart the Pipelines web UI

Finally, restart the Pipelines web UI. This is best done from the Pipelines D1 master account.

Restarting an Application

Trusted servers

The above scenario uses Docker in Docker to build. This is a secure environment where one tenant cannot affect another tenant's processes. That is to say, a build cannot see other Docker builds or images on the host.

You can tell Pipelines that your shared build servers have "trusted" Docker daemons, which will run Docker builds directly on the host instead of Docker in Docker. Be aware that builds will then have access to the Docker daemon on the host and can maliciously affect other running Docker containers, including other Pipelines builds.

Enable trusted servers

You must have Pipelines agent 3.66.25 or greater on your build server.

You must have access to the Pipelines administrator console.

  1. In the console, navigate to the Enterprise tab.
  2. In the Console / Enterprise Settings click the Configuration tab.
  3. Check the [x] Trust local Docker Daemon on Shared Build Servers button.

You have now enabled trusted docker for your Pipelines shared builds.

Add a server

Adding an extra Pipelines instance for redundancy is relatively easy. When hosting more than one Pipelines instance, the instances must be behind some form of load balancer or proxy. This should have been configured on the initial bootstrap of Pipelines.

The first step is to instantiate a server. The minimum requirements for a Pipelines instance are:

  • 2 CPUs
  • 8 GB RAM
  • 50 GB volume

Of note, Pipelines works best with Ubuntu 14 and 16, but can work with many flavors of Linux.

Note that if your existing Pipelines instances are using an IAM role (or similar security features), you may need to ensure the new server has the same role(s).
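A quick sanity check of a new Linux server against the minimum requirements above can be scripted (the paths used are Linux-specific):

```shell
# Report CPU count, memory, and root volume size for comparison with the minimums.
cpus="$(nproc)"
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
root_vol="$(df -h / | awk 'NR==2 {print $2}')"
echo "CPUs: $cpus (minimum 2)"
echo "RAM: ${mem_gb} GB (minimum 8)"
echo "Root volume: $root_vol (minimum 50 GB)"
```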

Install agent

On the new server, install the Pipelines agent from your existing Pipelines on-premises installation.

Authenticate (log in) the agent to the root D1 account of your Pipelines on-premises installation.

Copy configuration files

  1. Log in (ssh) to the command prompt of your existing working Pipelines on-premises server.
  2. Copy /etc/distelli-config.json and /etc/distelli-creds.json to the new server.
  3. Ensure the files are owned by the distelli user (created when you installed the agent):
     chown distelli /etc/distelli-c*.json
  4. Ensure the file permissions are set appropriately:
     chmod 600 /etc/distelli-c*.json
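The expected end state can be demonstrated with a throwaway copy of the files. A temp directory is used here so the sketch runs anywhere; on the new server the real files live in /etc and must be owned by the distelli user.

```shell
# Create stand-in files and apply the required permissions.
mkdir -p /tmp/distelli-demo
touch /tmp/distelli-demo/distelli-config.json /tmp/distelli-demo/distelli-creds.json
# chown distelli /tmp/distelli-demo/distelli-c*.json   # requires root and the distelli user
chmod 600 /tmp/distelli-demo/distelli-c*.json
stat -c '%a %n' /tmp/distelli-demo/distelli-c*.json
```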

Install infrabase

The next step is to install the Pipelines OnPremInfraBase package to the server. This package includes the basics to run Pipelines, including Java 8.

  1. Log in (browser) to your on-premises Pipelines web UI with the root D1 account.
  2. Ensure you are on the Pipelines for Applications web UI.
  3. Click OnPremInfraBase application.
  4. Click Environments.
  5. Click the production environment name, which should resemble infrabase-REGION-STAGE.
  6. Click the (+) icon link to add a server to this environment.
  7. Ensure the [x] Deploy app to added servers option is checked (enabled).
  8. In the list, select the new server you just installed the Pipelines agent on, and click Add Servers.
  9. You should be prompted to deploy the active release to this one server now. See an example below.

    Deploy infrabase

    If you are not prompted to deploy, see the Troubleshooting section below.

  10. Click Deploy to deploy the infrabase to the new server.

Install packages

Now follow the same procedure you used to install the infrabase for all the other Pipelines services. It is important that these are deployed in this specific order:

  1. AgentService
  2. DeploymentMonitor
  3. DistelliBackendService
  4. Pipelines web UI

The idea is that you will go to each of those Applications, in order, in Pipelines; go to the environment and add the server. This will initiate a deployment of the active release of that Pipelines service to just that server.

Let each one finish successfully before continuing to the next.

Add server to load balancer

Finally, add the new Pipelines instance to any load-balancer or proxy. You should be up and running.

Troubleshooting

Issue: Not prompted to deploy when adding server to environment

This occurs when there have been no physical deploys of the application, except for the initial bootstrap deploy. To resolve this:

  1. Remove all the existing working Pipelines servers from the environment.
  2. Ensure only the new Pipelines server is in the environment.
  3. Initiate a deploy of the active release.
  4. After the deploy successfully completes, add all the existing working Pipelines servers back into the environment.

Warning: Do NOT deploy to the existing working Pipelines servers! This may cause an outage!

Set up log pruner

Pipelines services generate a large volume of logs. Logs are kept in the DISTELLI_APPHOME directory.

In a default install, these are located at:

/distelli/envs/agent-service-REGION-STAGE/logs
/distelli/envs/backend-service-REGION-STAGE/logs
/distelli/envs/dmon-REGION-STAGE/logs
/distelli/envs/proxy-REGION-STAGE/logs
/distelli/envs/webui-REGION-STAGE/logs

The REGION and STAGE can be found in the original distelli-install.sh that was used to bootstrap the first Pipelines server.

Clone the repository

The pruner already exists at github.com/Distelli/onprem-log-pruner. You can simply clone that repository and add it to your software repository.

If you would rather not clone that repository, you can create a new repository containing the single following file, distelli-manifest.yml:

distelli/onprem-log-pruner:
  Build:
    - echo "...Nothing to build..."
  Env:
    - LOG_DIRS: '( "/distelli/envs/agent-service-REGION-STAGE/logs" "/distelli/envs/backend-service-REGION-STAGE/logs" "/distelli/envs/dmon-REGION-STAGE/logs" "/distelli/envs/proxy-REGION-STAGE/logs" "/distelli/envs/web UI-REGION-STAGE/logs" )'
    - LOG_EXPIRE_DAYS: '30'
    - LOG_SLEEP_SECONDS: '1800'
    - LOG_DO_IT_FOR_REAL: 'false'
  Exec:
    - echo "Starting Log Pruner"
    - while true
    - echo "Pruning"
    - do
    -     for LOG_DIR in "${LOG_DIRS[@]}"
    -     do
    -         'echo "LOGDIR: $LOG_DIR"'
    -         find $LOG_DIR -name '*.log.gz' -mtime +$LOG_EXPIRE_DAYS
    -         if [ "$LOG_DO_IT_FOR_REAL" = true ] ; then
    -             echo "Deleting above files!"
    -             echo "---------------------"
    -             find $LOG_DIR -name '*.log.gz' -mtime +$LOG_EXPIRE_DAYS -exec rm {} \;
    -         fi
    -     done
    - sleep $LOG_SLEEP_SECONDS
    - done
    - 'true'

After you have cloned or created the repository, you will have to edit the distelli-manifest.yml and set the LOG_DIRS environment variable. Note that you can override the environment variables in the Pipelines web UI application environment.

Warning: This script will indiscriminately delete files in the LOG_DIRS that are older than 30 days and have the extension .log.gz. If you set LOG_DIRS to the wrong directory, you may damage your server or the Pipelines install.

When you initially set this up, it will not actually delete anything; it will only report what would be deleted. This way you can validate that things are set up correctly before any actual deletion.
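The dry-run behavior can be demonstrated against a throwaway directory; in a real install, LOG_DIRS points at the /distelli/envs/.../logs directories listed above.

```shell
# Build a demo log directory with one stale file and one fresh file.
LOG_DIR="/tmp/pruner-demo/logs"
LOG_EXPIRE_DAYS=30
mkdir -p "$LOG_DIR"
touch -d '40 days ago' "$LOG_DIR/old.log.gz"   # GNU touch (Linux)
touch "$LOG_DIR/new.log.gz"
# Dry run: only list candidates; add `-exec rm {} \;` to actually delete.
find "$LOG_DIR" -name '*.log.gz' -mtime +"$LOG_EXPIRE_DAYS"
```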

Create Pipelines application

Next you will want to create an application in the root D1 account in Pipelines for Applications. This will be connected to the repository created above.

Create Pipelines application environment

After the application is created, you will need to create an environment to deploy the application. The servers in this environment should be your actual Pipelines instances that are running the Pipelines services with logs, since this is where the on-premises log pruner must run.

Pruner Environment

Set environment variables

You may opt to override the environment variables in the distelli-manifest.yml. This can be done in the application environment.

The default values for LOG_EXPIRE_DAYS and LOG_SLEEP_SECONDS are appropriate for Pipelines logs.

You should wait to set LOG_DO_IT_FOR_REAL to true until you have verified the appropriate LOG_DIRS have been set.

Pruner Environment

Deploy the pruner

When you added the application, a build should have initiated and created a release. If not, ensure you create a release by building the application.

Now deploy the release to the environment.

You can see, in the STDOUT LOGS of the deployment, the files that would be deleted. To actually delete the files, you must set LOG_DO_IT_FOR_REAL to true.

Pruner Deploy false

When you are ready, set LOG_DO_IT_FOR_REAL to true and re-deploy the application to the environment. The log pruner will now automatically run and prune files for you. You can see the files that are deleted.

Pruner Deploy true

You can leave the pruner running and check on its status at any time in the STDOUT logs.
