DevOpsy-Turvy Blog

Scattered thoughts on work, life, Universe and everything

Shell Scripts Revisited. Again.


Woot, shell again?!

While there are indeed many higher-level scripting languages out there which are much better than shell in terms of maintenance, code reuse and readability, one cannot ignore the fact that there are still many shell scripts on your average system:

  • scripts executed during boot time
  • installation scripts (package pre- or post-install)
  • program wrappers (older Java, in my opinion, is notorious for using shell crutches to load itself up)
  • and so on

Shell scripts are used for their simplicity, minimal dependency requirements and fast loading times, and while all three of those points could be argued to death, following some simple shell scripting design guidelines makes it much more likely they actually hold.

A really short history of shell

In the beginning was the shebang, and the shebang was /bin/sh

While it is true that the basic shebang is indeed /bin/sh, there has never been a single “common” shell that it refers to.

#!/bin/sh
#
# Here starts the script

In 1977 Stephen Bourne from Bell Labs created a basic shell interpreter which provided enough features for it to become a scripting language of choice, popularized in B. Kernighan and R. Pike’s book “The UNIX Programming Environment”, and its commands and features have since been extended and altered in subsequent incarnations throughout various UNIX flavors. One of the major consequences of this was that scripts written under the assumption that they would be executed by a particular version (or flavor) of the shell could fail when run in another environment, which exposed a different shell interpreter under the same /bin/sh name. (I am leaving out of scope the so-called ‘C shell’, a phenomenon that arose due to deficiencies in the early versions of the Bourne shell and which, in my opinion, created more troubles than it tried to solve.)

The POSIX standard was developed in 1988, with one of its goals being to finally reach agreement on a subset of accepted features across the diversity of UNIXes and the shells used in them. It is my personal opinion that, to improve the chances of writing code that runs on a larger number of systems without compromising on features, it is a much better policy to adhere to the POSIX standard than to stick to any particular pre-historic version of an interpreter.

Shell scripting guidelines

A lot has been written on the subject and obviously the authors hold various, sometimes conflicting, views, so the following is just an attempt to summarize the knowledge I derived from those sources into a set of guidelines I worked out for myself while putting the theory into practice.

Do not use “bashisms”

In my opinion, Bash has become so popular mainly because it has been included as the default shell in many Linux distributions (and OS X as well!) and because of the many extensions and additions it has gained throughout its development. Although Bash is now so widespread that it could be argued whether one should really shy away from its specific syntax and the power it brings to some language constructs, in my opinion it is still worth the effort to stay within the safe harbor of POSIX compliance.

I think I used most of Bash’s advanced features in my early shell scripting, only to become convinced at a later stage that in the majority of cases they do not bring enough benefit to risk the portability and, sometimes, even the readability of the script.

Arrays

While arrays might seem like a natural data structure to use in solving different problems, I found I could go completely without them. If I want to enumerate some values stored in a variable, set -- $variable (with the variable deliberately left unquoted so that field splitting takes place) assigns them to the positional parameters. Accordingly, if I need to collect the properties of some entity distributed across different records, I simply use a structure that allows me to employ the set built-in again.

Taking the /etc/passwd file as an example, I can easily parse it as follows:

(parse_passwd_file_with_set)
OIFS=$IFS
while read -r record; do
    IFS=":"
    set -f -- $record    # split the record into fields on ':'
    echo "username=$1, comment=$5, shell=$7"
    IFS=$OIFS
done < /etc/passwd

The -f option to set is there to prevent pathname expansion for fields in the passwd file that might contain *. Note, however, that depending on the shell, blank fields may cause the positional parameters to shift. Any more complex data processing makes me seriously consider a tool better suited for the task, such as awk, Perl or Ruby.

Associative arrays

Starting from version 4.0, Bash supports associative arrays, or hashes as they are called in other higher-level languages (note that they have to be declared first with declare -A):

$ declare -A newhash
$ newhash[name]="some var"
$ echo "${newhash[name]}"
some var

Any structure like that can easily be mimicked by a combination of a variable holding a set of records and the set built-in mentioned above. I just need two separators: one for the records inside the variable and another for the fields within a record. The passwd file example above is a good demonstration of the technique.
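To make the idea more concrete, here is a minimal sketch of looking up a “value” by “key” using nothing but set and two separators (the users variable, the lookup_shell function and its field layout are made up purely for illustration):

# ';' separates records, ':' separates fields within a record
users="alice:/bin/sh;bob:/bin/bash"

lookup_shell()
{
    key=$1
    OIFS=$IFS

    IFS=";"
    set -- $users              # split the variable into records
    IFS=$OIFS

    for record; do
        IFS=":"
        set -- $record         # split one record into its fields
        IFS=$OIFS
        [ "$1" = "$key" ] && echo "$2"
    done
}

lookup_shell bob               # prints /bin/bash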

Indirect expansion

Another Bash-specific feature, this fancy name describes referencing a variable via a name stored in another variable. It is best illustrated with the following example:

$ somevar=somevalue
$ varname=somevar
$ echo ${!varname}
somevalue

That’s easily done using my old friend and foe eval in POSIX-compliant shells:

$ somevar=somevalue
$ varname=somevar
$ eval "echo \$$varname"
somevalue

There were times I really overused eval for no good reason, doing all sorts of tricks like the one described above. These days I truly believe this functionality is rarely necessary.

test-like built-in

Apparently inherited from the Korn shell, this is the ability to do various comparisons and tests by means of an internal built-in without having to resort to external commands like test or expr. Here are a couple of examples:

[[ "$var1" == "$var2" ]]
[[ -f "$file" ]]
[[ "$var" =~ "^a.*z$" ]]

Again, the small gain in speed is not worth the compatibility issue it creates.

All but the regex-matching case can easily be replaced with their single-bracket counterparts, which correspond to calling the test command. A few checks can still be handled by the shell internally while staying POSIX-compliant, such as testing for an unset variable; the syntax is different though:

# Break execution and print error message if variable is unset
: ${var:?error message}
# Set a variable to some default value in case it is unset
: ${var:=default value}
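
And going back to the double-bracket examples above, the non-regex ones translate directly into the portable single-bracket form (note that == becomes a single =):

[ "$var1" = "$var2" ]
[ -f "$file" ]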

Full regex matching, on the other hand, can either be done with the external command expr, which like test has been around for quite a while, or, where one can get away with simpler shell glob matching, with the far more readable case built-in:

case $var in
  ab*) : # action
  ;;
  cde[fgh]) : # another action
  ;;
  *) : # catch-all or default action
  ;;
esac

The double semicolon ;; closing a case clause can also be placed on the same line as the pattern and action, i.e. *) : ;;
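As for the expr route mentioned above, here is a minimal sketch: expr matches a basic regular expression anchored at the start of the string and succeeds when the match is non-empty.

var="abcz"
# match strings that start with 'a' and end with 'z'
if expr "$var" : 'a.*z$' >/dev/null; then
    echo "matched"
fi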

Bash extensions of parameter expansion

The parameter expansions mentioned in the previous section are all covered by the POSIX standard, along with those for stripping a suffix or a prefix from a variable’s value: ${var%suffix} and ${var#prefix}. The one Bash uses for substituting a matched pattern in a string, ${var/pattern/replacement}, however, is not. For someone like me who has been treating this extension as a given, that might be quite an unpleasant discovery. For what it’s worth, simple substitutions can be mimicked by a combination of suffix and prefix truncations, while more complex tasks should be delegated to the tools that have string manipulation at their core: sed, awk or perl.
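A minimal sketch of such a mimicked substitution (the variable names are illustrative); it replaces the first literal occurrence of the pattern, much like Bash’s ${var/pattern/replacement}:

var="path/foo/file"
pattern="foo"
replacement="bar"

case $var in
    *"$pattern"*)
        # everything before the pattern + replacement + everything after it
        var="${var%%"$pattern"*}$replacement${var#*"$pattern"}"
        ;;
esac

echo "$var"    # prints path/bar/file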

I have already mentioned some other Bash-specific parameter expansions earlier, in the ‘Indirect expansion’ section.

Brace expansion

Apart from parameter expansion, Bash also has a feature called “Brace expansion”, which allows one to generate string sequences using shortcuts like:

  • {1..10}: expands to a sequence of numbers from 1 to 10
  • {a..z}: expands to a sequence of letters from ‘a’ to ‘z’
  • {one,two,three}: expands to one two three

Those are quite handy on the command line, but their usage in scripts (except for Bash-specific command completion scripts) is undesirable for the obvious portability reasons; a POSIX-friendly alternative for the numeric case is sketched below.
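For instance, a simple counter loop with arithmetic expansion replaces {1..10} in any POSIX shell:

i=1
while [ "$i" -le 10 ]; do
    printf '%s ' "$i"
    i=$((i + 1))
done
printf '\n'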

source and the dot operator

Bash has inherited many things from the Korn shell, and one of them is the ability to use the source command alongside the traditional . (dot) operator for importing function libraries from external files. source works fine in Bash and in ksh, but it will fail in minimalistic shell interpreters like dash.

I agree that source is more visible in a script than a dot, but it is also less portable.

Use of local variables

This is a somewhat controversial topic, as the POSIX standard makes no provision for variables local to a function: there seem to have been early debates on whether to include them, but in the end it was decided to leave them out of the standard, allowing for an optional reservation of the local keyword in POSIX-compliant shells. I guess among the things that contributed to the issue is the fact that the original Bourne shell had no scope for variables other than the global one, while later implementations use different keywords for a variable’s local scope: depending on the flavor of the shell interpreter, that could be typeset in the Korn shell, local in the Almquist shell and its successor dash, and declare plus all of the above in Bash. To add even more confusion, the creators of Korn shell ‘93, in what I regard as a contrived effort of conformance to standards, made the typeset keyword have the desired effect on a variable only inside a function defined with the ksh-specific syntax function <name> {}. If a variable is prefixed with typeset within a POSIX-style function definition name() {}, it silently becomes global.

In my studies I did come across advice to stay away from local variables altogether and to use sub-shells in the cases where they are “truly required”, enclosing the function body in parentheses instead of curly braces:

# example of a function executing in a subshell
function_name()
(
  : # function body
)

I see two problems with this approach, however. First, while this type of command block might look like a function, it is no different from a standalone script, yet is much harder to spot in the code, hence I consider it an abuse of parentheses. (One of the side effects is that assigning a value to a variable which is supposed to be global will have no effect outside the sub-shell.) Second, I really struggle to differentiate the cases where I need local variables from those where I do not. They are either there or not, and I’d rather have them than miss them.

Another workaround for the imposed absence of local variables (we are still in the 1970s, right?) is to pretend all variables are global and prepend some pseudo-unique prefix, say the function name, to those which would otherwise be local in a 21st-century script. I do not reckon that is a good approach either, from the point of view of code readability and maintainability, as the sketch below illustrates.
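A quick illustration of what that prefixing convention looks like in practice (the function and variable names are made up):

# every would-be local of count_lines carries the function name as a prefix
count_lines()
{
    count_lines_file=$1
    count_lines_total=$(wc -l < "$count_lines_file")
    echo "$count_lines_total"
}

count_lines /etc/passwd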

Why do I want to use local variables? One obvious reason, to me, is to avoid otherwise highly probable clashes in the global namespace: where, say, the i counter in a loop gets some unexpected assignment from within a function call. Another, perhaps less evident, reason is to make it explicit that a shell-specific variable such as the field separator IFS is local to a function, so that whatever changes it undergoes never affect the calling script itself.

So how does one use local variables and still make sure the code is portable across POSIX-compliant shells? I think a good start is to use the least ambiguous keyword for scoping a local variable, and among those mentioned, local has a transparency that is hard to beat. My guess is that Kenneth Almquist had similar considerations when choosing this keyword for his shell. Any later derivative of ash (like the default dash on Debian systems) will recognize the local keyword properly, and Bash will also accept it as valid. The only problematic one is the Korn shell, ksh, which allows nothing but typeset for this task. Besides, as already noted, typeset is only effective when used in a ksh-specific function definition, so a simple substitution of the local keyword with its counterpart typeset will not be of much help here:

(local_to_typeset)
my_func()
{
    # Using the same variable in a local context
    typeset var="local value";
    echo "var in my_func is '$var'"
}

# Assigning a global variable 'var'
var="main"

my_func
echo "var in main is '$var'"

Watch how the Korn shell assigns a value to the global variable instead of the presumably local one:

$ ksh local_to_typeset
var in my_func is 'local value'
var in main is 'local value'

However, I can turn the enemy into a friend by using the -f option of the typeset command, which outputs all functions accessible to a script along with their bodies, in other words, ready to be sourced by a script. Combined with a sed one-liner, here is how one can easily convert all function definitions to the ksh format:

(convert_to_ksh)
my_func()
{
    # Using the same variable in a local context
    local var="local value";
    echo "var in my_func is '$var'"
}


PROG=${0##*/}
TMPDIR="/tmp/$PROG.$$"
mkdir -p "$TMPDIR"

trap 'rm -rf "$TMPDIR"' 0

# Assigning a global variable 'var'
var="main"

if test_func() { local var; }; test_func 2>/dev/null; then
    :
else
    echo "Converting all functions to 'ksh' format">&2
    alias "local"="typeset"
    tempfile="$TMPDIR/tmpconvert2kshXXXXXXX"
    typeset -f | sed -e '/^[^"#]*()/ {s/()//; s/^/function /; }' > "$tempfile"
    . "$tempfile"
fi

my_func
echo "var in main is '$var'"

Let’s test it:

$ ksh convert_to_ksh
Converting all functions to 'ksh' format
var in my_func is 'local value'
var in main is 'main'

$ dash convert_to_ksh
var in my_func is 'local value'
var in main is 'main'

This kind of snippet needs to be executed in a script after all shell function libraries have been imported (or sourced, in the scripting jargon). As one can see, the trigger for the conversion is the failure to call a simple test function with a local variable declared in it.

Armed with this trick, I think I have all the bases covered and can write POSIX-compliant shell scripts without compromising on the use of local variables.

Use variables instead of literals

Using a literal in code, be it a string or a number, is generally considered bad practice, yet I encounter literals quite often in shell scripts (say, a /path/to/a/file or some magic number 5 repeated throughout the script). First and foremost, doing this usually renders the code non-reusable for other, similar tasks. Second, there is a much higher probability of a typo creeping in, and it will also be much harder to find. If one makes a typo in a variable name, passing the -u option to the shell (either in the shebang or via the set builtin) reveals any unset variables, so any mistyped names pop out quickly; mistyped literals are much harder to track down. Third, a literal is very often not self-descriptive, whilst a variable name, if chosen wisely, becomes the first and most essential documentation layer, making the code much easier to read.

So any time another literal comes into the code, I assign it to a variable and refer to it using the variable name.
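A small sketch of how this plays out in practice; the configuration path, the retry count and the process_file function are purely illustrative:

#!/bin/sh -u

config_file="/etc/myapp/config"    # named once, reused everywhere
max_retries=5

process_file()
{
    grep -c "^enabled" "$1"
}

i=0
while [ "$i" -lt "$max_retries" ]; do
    # a typo such as $max_retires in the condition above would be caught
    # by -u at run time instead of silently expanding to an empty string
    process_file "$config_file" && break
    i=$((i + 1))
done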

Always quote variable assignments

Some shell interpreters, Bash among them, are more forgiving than others and will let me assign an unquoted multi-word value obtained from a command substitution to a local variable inside a function:

(unquoted_assignment_in_function)
my_func()
{
    local file=$1
    local var=$(ls -l $file)

    echo "$var"
}

my_func $1

Bash is happy:

$ bash unquoted_assignment_in_function sample_output
-rw-r--r--  1 roman  staff  39 17 Jun 10:07 sample_output

Shells that traditionally cause the most trouble, such as dash, will complain about such an assignment, though the error thrown is not very helpful:

$ dash unquoted_assignment_in_function sample_output
unquoted_assignment_in_function: 4: local: 1: bad variable name

Interestingly enough, removing the local declaration makes dash happy again. I reckon it is better to be safe than sorry and always quote the right-hand side of an assignment, regardless of whether it is a local variable or not. It also makes my coding style more consistent, I guess:

var="$(value)"

Always initialize local variables

In some shells like dash, if there is a global variable with the same name as an uninitialized local variable, the latter will inherit its value. Watch out:

(init_local_vars)
func()
{
    local var;
    echo $var
}

var="global"
func

will produce global. The following assignment inside the func function will do the trick:

local var=;

Use command builtin to lookup a command or a function

POSIX has standardized the command built-in with the following three options: -p, -V and -v.

  • -p – search for the specified command using a default value for PATH that is guaranteed to find the standard utilities, and execute it if found
  • -V – look up the specified command among the builtins, functions, aliases and the PATH, and print a verbose message about what it resolves to (much like type)
  • -v – the lookup is done as for -V, but the result (if positive) is simply the name or path of the command itself, or the alias definition

The latter form can be used quite effectively for testing whether a command or a function is available to a script. Effectively, it can be used everywhere the external which command would otherwise be used, and also instead of the less portable typeset, which is only available in Bash and the Korn shell.
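For example, a minimal sketch of a portable availability check (curl and wget are just stand-ins for whatever tools the script might need):

if command -v curl >/dev/null 2>&1; then
    fetch() { curl -fsSL "$1"; }
elif command -v wget >/dev/null 2>&1; then
    fetch() { wget -qO- "$1"; }
else
    echo "neither curl nor wget is available" >&2
    exit 1
fi
# fetch can now be used uniformly in the rest of the script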

Use set -e at the start of a script

One of the annoying default behaviors of a shell interpreter is its neglect of the return codes of the individual command statements in the currently executing script: it allows the script to continue its doomed run even after something fatal has already happened.

Advocates of so-called “defensive shell programming” almost unanimously say that one should not rely on the command statement just prior to the one being written not failing. And I have been rigorously following this sound advice by checking exit statuses and using “do or die” constructs:

command || die "command has failed"

Where die would obviously be a custom function printing the supplied message and exiting the script with a non-zero return code. This is certainly shorter and more reliable than checking the exit status via the $? variable; I have seen plenty of cases where someone inserted a command between the action and the check (say, for debugging purposes) only to see the script start behaving in completely unexpected ways.
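One possible definition of such a die helper, for reference:

die()
{
    # print the script name and the supplied message to stderr, then bail out
    echo "${0##*/}: $*" >&2
    exit 1
}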

However, an even easier way to make sure the shell halts a script when one of its statements has failed is to pass the -e flag to the shell. That can be done in two basic ways: either with the -e option on the shebang line, or by explicitly calling set -e within the script body.

Now I can get rid of the || die ... endings everywhere and concentrate on the logical flow of my script rather than on paranoid “what if my previous statement has failed” checks.

The flaws? Well, first, I am still not protected from a command failing within a pipe: command1 | command2 will let the trouble roll on even if command1 was not a success. Second, by not using my custom die function I am relying on meaningful error reporting from the command that returned a non-zero exit code and broke the execution of my script. If it is too shy to tell me why it failed, I might be pondering for a while, observing my script stop at a place I did not expect, trying to locate where the actual failure occurred. Unlike other, more advanced languages, shell has no means of propagating errors up the stack. Well, there is a somewhat obscure way using a trap on signal 0, but my best bet in terms of a pointer to the failing location is a global variable of some sort that is set to a hinting value throughout the code. A function name could be a good candidate for this. But then again, one needs to reset the variable every time control returns from a function, which is tedious, and relying on a ‘magical global variable’ is not a good thing anyway. It seems the POSIX folks thought about something like that when standardizing the $LINENO variable, but the devil is in the implementation, and some shells like dash or ksh will not give you a meaningful value inside the trap:

(test_lineno)
trap 'if [ $? -ne 0 ]; then echo "Error detected: SOME_VAR is $SOME_VAR, and LINENO is $LINENO"; fi' 0

set -e

SOME_VAR="before the echo"
echo "I am on $LINENO now"
SOME_VAR="before the false"
false
$ bash test_lineno
I am on 6 now
Error detected: SOME_VAR is before the false, and LINENO is 8

$ dash test_lineno
I am on 6 now
Error detected: SOME_VAR is before the false, and LINENO is 1

If you were wondering, set -e will also correctly exit upon an error happening in a command substitution assignment:

var="$(command_generating_output)"

And that means it is no longer possible to check the output of a failed command in the statement that follows. What I mean here is that one should rewrite the following logic:

var="$(command_generating_output)"
if [ -n "$var" ]; then
    : # do something
fi

so that the call to the command generating the output is wrapped inside the conditional itself, for example like this:

if  command_generating_output > "$tempfile"; then
    : # process the tempfile
fi

Another option is to use “here-scripts”, which I discuss right after describing the “fall-through” behavior of pipes. I am wary of constructs like the following, which are unfortunately quite common:

var="$(command1 | command2)"

If command1 fails, the pipe will still pass execution on to command2, along with whatever output command1 produced. There are two alternatives to a pipe at this point:

  • temporary files to store and process the intermediary output
  • “here-scripts” to the rescue

Here is the concept of the “here-scripts”, which apparently have some fancier name I always tend to forget:

(test-here-doc-exec)
my_function()
{
    "$1" # Command to run prior to the block execution
    {
        [ $? -eq 0 ] || return 1
        # Do something with the output
        cat -
    }<<EOF_COMMAND_PRODUCING_AN_OUTPUT
$(cat "$2")
EOF_COMMAND_PRODUCING_AN_OUTPUT
}

my_function "$1" "$2"

Its structure might seem kind of topsy-turvy, but the obvious benefit is hassle-free management of the temporary storage, which the shell handles for me behind the scenes: a temporary file holding the output of the command in the here-document block is automagically created by the shell interpreter and destroyed once execution leaves the scope of the “inner”, or processing, block. What’s more, this file seems to be accessible only by the process executing the current shell script, which brings an additional fuzzy feeling of security and non-conflicting processing.

Here-scripts had been my preference for a long while, until I discovered an incorrect behavior in one of the POSIX-compliant shells, dash. There, a check for the success or failure of the here-script at the beginning of the processing block inside the {} brackets always returns the exit status of the command preceding that block. Ouch! Watch this:

$ cat >sample_file <<EOF_SAMPLE_FILE
Here is a text file
with 2 lines in it
EOF_SAMPLE_FILE

$ bash test-here-doc-exec false sample_file; echo $?
Here is a text file
with 2 lines in it
0

$ dash test-here-doc-exec false sample_file; echo $?
1

$ dash test-here-doc-exec true sample_file; echo $?
Here is a text file
with 2 lines in it
0

Until this bug is fixed it seems as if the only way around the pipes (and command output processing in general) is using temporary files, which is the subject of the next section.

Using temporary files

There are a couple of points to consider when using temporary files in a script:

  1. Where to create them
  2. How to clean them up after they are no longer needed

While the first issue might be resolved by using the mktemp /tmp/some_nameXXXXXX command, combined with blind faith that a /tmp directory is uniformly present across all systems, a consistent cleanup of the files can only be achieved by employing the trap mechanism, which ensures the files are deleted even if the script fails or is interrupted somewhere in the middle of its execution.

A trap can be placed on a number of signals sent when the executing script is interrupted, aborted, stopped and so on. In any case it should be regarded as a global assignment applicable to the script as a whole, as I am unable to trap, say, a return from a function. So some sort of convention needs to be in place for functions which reside outside the script but need to make use of temporary files. The one I try to follow is to always use a prefix in the form of a global variable:

: ${TMPDIR:=/tmp}

This is a reasonable default to use. I may or may not try to clean up individual temporary files in a function once they have been used, as it is oftentimes a tedious and unreliable task. It is best to leave that to the calling script, which can employ a trap as described; a sketch of the whole convention follows.
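Putting the pieces together, a minimal sketch (PROG and workdir are illustrative names):

: ${TMPDIR:=/tmp}

PROG=${0##*/}
workdir=$(mktemp -d "$TMPDIR/$PROG.XXXXXX") || exit 1

# remove everything on exit; re-raise on common interrupting signals
trap 'rm -rf "$workdir"' 0
trap 'exit 1' HUP INT TERM

some_function_output="$workdir/output"
date > "$some_function_output"
cat "$some_function_output"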

Use if-then-fi instead of [ condition ] && action constructs

I am restraining myself from using the constructs like

[ condition ] && action

in favor of

if [ condition ]; then
    action
fi

The reason is that when set -e is used for the entire run of the script (which is highly recommended), use of the logical && can lead to unexpected exits from the script whenever the condition does not evaluate to true. I know, it seems bizarre given the expected set -e behavior described in the man page, but I have stumbled upon it several times with dash and am not going to risk it anymore. A safer construct which I sometimes employ is

[ negated_condition ] || action

whenever it reads naturally in the code. This one always produces a zero return code if the action does not fail (and if the action does fail, the fallout is actually welcome): when the negated condition evaluates to true, the action is simply not executed by virtue of short-circuit evaluation.
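For the record, a minimal sketch of how the short-circuit form can bite under set -e, assuming dash or Bash semantics (the log_verbose function and the VERBOSE variable are made up): when the failing test is the last command of a function, the function itself returns non-zero and the calling script, running under -e, exits.

#!/bin/sh -e

log_verbose()
{
    # the failed [ ] test becomes the function's return status
    [ -n "$VERBOSE" ] && echo "verbose: $*"
}

VERBOSE=""
log_verbose "starting up"    # returns 1, so the script stops right here
echo "never reached when VERBOSE is empty"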

Deployment Libraries: Sharing Responsibility of the Deployment Process


Rapid deployment and continuous integration on modern application development platforms

The pace at which software is developed and deployed in today’s world has increased rapidly due to shortened iterations, automated testing and the models of continuous integration implemented to various degrees throughout the stages of the development cycle. In such environments the source repository itself often becomes the package, where its version is either a tag or even the hash of the latest commit. The metadata about such a “package” can be stored in a plain text file at some location within the repo.

One of the application development platforms that follow this model closely is Node.js. While one is still able to create node packages in the conventional sense, it is also possible to provide a dependency in the form of a GitHub repository URL with a package.json file sitting at the repository root, serving as simple metadata for the package. It has a plain JSON format where the only required fields are the name and version of the package. Optional, but advisable, are fields like the author, description, repository location and license. Within the files section one can specify exactly which files are going to be “included” in the package:

{
  "name": "package-name",
  "author": "My Name <email@domain.com>",
  "version": "1.0.1",
  "repository": {
    "type": "git",
    "url": "git://github.com/TopLevel/reponame.git"
  },
  "files" : [
    "filename1",
    "filename2"
  ],
  "description" : "Package description",
  "license" : "BSD"
}

Now, running npm install for a Node.js app that has the dependency specified in its package.json as follows:

{
  "dependencies": {
      "package-name": "git+https://github.com/TopLevel/reponame.git",
      ...
  }
}

will pull down filename1 and filename2 (along with the README.md and the dependency’s own package.json file) into the node_modules/package-name subdirectory of the Node.js app being installed. During subsequent npm install runs, the version field in the dependency’s package.json determines whether its files will be re-downloaded from the repo or not. This mechanism proves quite useful in maintaining the software installation and deployment process.

Sharing responsibility of the deployment process

Software deployment sits on the border between development and operations, and it is highly desirable that the Devs are aware of the process and, even better, empowered to control it through the tools provided to them by the Ops.

It is not uncommon to have an environment where the majority of the apps share a very similar deployment process that varies only in small bits, like the creation of files, ensuring that particular users exist on a system, requesting authentication credentials from existing running services and so on. By creating a reusable library of simple functions serving as a middle layer between system-level operations and the Devs trying to set up an environment for a successful deployment of their apps, one can separate the domain of environment specification from that of its actual implementation. Or, in layman’s terms, the “what needs to be there” from the “how it is going to be done”. Accordingly, the app’s environment, dictated by the Devs, can now be configured by them directly in their installation scripts using the building blocks from the deployment toolset, in this case the library functions.

To minimize the maintenance burden and avoid code duplication across a multitude of repositories, storing the deployment library separately from the apps themselves sounds like a good idea. In order to use the library and employ its functions in an install script, one obviously needs to pull it down from its repo, maybe even as one of the first steps of the script itself. For a Node.js app that happens automatically via the npm install command, but there is nothing complex in replicating the same behavior for any non-Node.js app with just a git pull and a simple JSON parser, as sketched below.
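A rough sketch of that bootstrap step in plain shell; the repository URL, the directory and the deploy_functions.sh file are hypothetical, and sed stands in for a proper JSON parser to keep the sketch dependency-free:

#!/bin/sh -e

LIB_REPO="https://github.com/TopLevel/deploy-lib.git"
LIB_DIR="./deploy_lib"

# clone the library on the first run, update it afterwards
if [ -d "$LIB_DIR/.git" ]; then
    (cd "$LIB_DIR" && git pull --quiet)
else
    git clone --quiet "$LIB_REPO" "$LIB_DIR"
fi

# pick the version field out of the library's package.json
lib_version=$(sed -n 's/.*"version"[^"]*"\([^"]*\)".*/\1/p' "$LIB_DIR/package.json")
echo "Using deployment library version $lib_version"

# import the library functions and start composing the install steps
. "$LIB_DIR/deploy_functions.sh"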

Now the Devs can use the library functions to compose the installation scripts for their apps, while the Ops gradually improve and extend the library over time. Feels like a win-win scenario? Well, it is important to make sure the same scripts are used for app deployments in all environments, starting with the development one.