<h1>Tracing SIP messages using Tcpdump</h1>
<p>Posted 2019-06-27 by Peter Wilmott</p>
<p>Tcpdump is a command line tool to capture the contents of packets passing through a network interface.</p>
<p>SIP is a signalling protocol used for initiating, maintaining, and terminating real-time media sessions.</p>
<p>Since SIP is a plain-text protocol, it’s easy to trace directly using tcpdump, without having to write packets to a file and then analyse them later using Wireshark.</p>
<p>To do this you can use the following command:</p>
<p><code class="language-plaintext highlighter-rouge">tcpdump -A -s 0 -n -nn -i any port 5060</code></p>
<p>The arguments do the following:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">-A</code> : Print each packet in ASCII</li>
<li><code class="language-plaintext highlighter-rouge">-s 0</code> : The number of bytes to capture from each packet; setting it to 0 selects the default of 262144 bytes, enough to capture full packets</li>
<li><code class="language-plaintext highlighter-rouge">-n</code> : Don’t convert host addresses to names</li>
<li><code class="language-plaintext highlighter-rouge">-nn</code> : Don’t convert protocol and port numbers to names</li>
<li><code class="language-plaintext highlighter-rouge">-i any</code> : Listen on the given interface; the pseudo-interface ‘any’ captures packets from all interfaces</li>
<li><code class="language-plaintext highlighter-rouge">port 5060</code> : Print all packets to and from port 5060</li>
</ul>
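<p>The header lines in the decoded output are plain <code class="language-plaintext highlighter-rouge">Name: value</code> pairs, so they are easy to pick apart with a few lines of script once you have captured some text. A minimal sketch; the <code class="language-plaintext highlighter-rouge">sip_headers</code> helper and the sample message are illustrative, not part of tcpdump:</p>

```ruby
# Parse "Name: value" SIP header lines out of a block of text such as
# the ASCII payload printed by `tcpdump -A`. Hypothetical helper,
# for illustration only.
def sip_headers(text)
  text.each_line.with_object({}) do |line, headers|
    next unless line =~ /\A([A-Za-z][A-Za-z-]*):\s*(.*?)\s*\z/
    headers[Regexp.last_match(1)] = Regexp.last_match(2)
  end
end

# Abbreviated sample message; the request line is skipped, headers kept.
message = <<~SIP
  INVITE sip:12345@192.168.47.122 SIP/2.0
  Call-ID: 2105648811
  CSeq: 20 INVITE
  Content-Length: 211
SIP

headers = sip_headers(message)
puts headers["Call-ID"] # prints 2105648811
```

<p>Piping the tcpdump output through a filter like this is handy when you only care about one dialog, e.g. matching on a specific Call-ID.</p>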
<p>After running this command you’ll see a real time display of SIP packets passing through the network interface:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>14:41:44.404801 IP 192.168.47.176.5060 > 192.168.47.122.5060: SIP: INVITE sip:12345@192.168.47.122 SIP/2.0
Eh....@.@.F.../.../z......d.INVITE sip:12345@192.168.47.122 SIP/2.0
Via: SIP/2.0/UDP 192.168.47.176:5060;rport;branch=z9hG4bK1064390103
From: <sip:peter@192.168.47.176>;tag=291367939
To: <sip:12345@192.168.47.122>
Call-ID: 2105648811
CSeq: 20 INVITE
Contact: <sip:peter@192.168.47.176>
Content-Type: application/sdp
Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, NOTIFY, MESSAGE, SUBSCRIBE, INFO
Max-Forwards: 70
User-Agent: Linphone/3.6.1 (eXosip2/3.6.0)
Subject: Phone call
Content-Length: 211
v=0
o=peter 193 3850 IN IP4 192.168.47.176
s=Talk
c=IN IP4 192.168.47.176
t=0 0
m=audio 7078 RTP/AVP 0 8 101
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-11
................
14:41:44.413779 IP 192.168.47.122.5060 > 192.168.47.176.5060: SIP: SIP/2.0 100 Trying
E..49...@._.../z../...... ..SIP/2.0 100 Trying
Via: SIP/2.0/UDP 192.168.47.176:5060;rport=5060;branch=z9hG4bK1064390103
From: <sip:peter@192.168.47.176>;tag=291367939
To: <sip:12345@192.168.47.122>
Call-ID: 2105648811
CSeq: 20 INVITE
User-Agent: FreeSWITCH-mod_sofia/1.6.19~64bit
Content-Length: 0
................
14:41:44.460851 IP 192.168.47.122.5060 > 192.168.47.176.5060: SIP: SIP/2.0 200 OK
E...9...@.] ../z../........FSIP/2.0 200 OK
Via: SIP/2.0/UDP 192.168.47.176:5060;rport=5060;branch=z9hG4bK1064390103
From: <sip:peter@192.168.47.176>;tag=291367939
To: <sip:12345@192.168.47.122>;tag=1Uyy4pFFS9N7p
Call-ID: 2105648811
CSeq: 20 INVITE
Contact: <sip:12345@192.168.47.122:5060;transport=udp>
User-Agent: FreeSWITCH-mod_sofia/1.6.19~64bit
Accept: application/sdp
Allow: INVITE, ACK, BYE, CANCEL, OPTIONS, MESSAGE, INFO, UPDATE, REFER, NOTIFY
Supported: timer, path, replaces
Allow-Events: talk, hold, conference, refer
Session-Expires: 300;refresher=uas
Content-Type: application/sdp
Content-Disposition: session
Content-Length: 224
Remote-Party-ID: "12345" <sip:12345@192.168.47.122>;party=calling;privacy=off;screen=no
v=0
o=FreeSWITCH 1561585300 1561585301 IN IP4 192.168.47.122
s=FreeSWITCH
c=IN IP4 192.168.47.122
t=0 0
m=audio 61204 RTP/AVP 0 101
a=rtpmap:0 PCMU/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-16
a=ptime:20
................
14:41:44.466941 IP 192.168.47.176.5060 > 192.168.47.122.5060: SIP: ACK sip:12345@192.168.47.122:5060;transport=udp SIP/2.0
Eh....@.@.G.../.../z........ACK sip:12345@192.168.47.122:5060;transport=udp SIP/2.0
Via: SIP/2.0/UDP 192.168.47.176:5060;rport;branch=z9hG4bK671645580
From: <sip:peter@192.168.47.176>;tag=291367939
To: <sip:12345@192.168.47.122>;tag=1Uyy4pFFS9N7p
Call-ID: 2105648811
CSeq: 20 ACK
Contact: <sip:peter@192.168.47.176>
Max-Forwards: 70
User-Agent: Linphone/3.6.1 (eXosip2/3.6.0)
Content-Length: 0
</code></pre></div></div>
<p>Once you’re done you can stop tcpdump using <code class="language-plaintext highlighter-rouge">Ctrl+C</code>.</p>
<h1>Running TensorFlow with Docker on GCP</h1>
<p>Posted 2018-10-23</p>
<p>Provision Virtual Machine:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gcloud auth login
$ gcloud config set project machine-learning-000000 # Your project id
$ gcloud beta compute \
addresses create mlvm \
--region=us-east1 \
--network-tier=PREMIUM
$ MLVM_IP="$(gcloud beta compute \
addresses describe mlvm \
--region=us-east1 \
| head -n1 | awk '{print $2}')"
$ gcloud beta compute \
instances create mlvm \
--zone=us-east1-b \
--machine-type=n1-standard-2 \
--subnet=default \
--network-tier=PREMIUM \
--address="$MLVM_IP" \
--maintenance-policy=TERMINATE \
--no-service-account \
--no-scopes \
--accelerator=type=nvidia-tesla-p100,count=1 \
--image=centos-7-v20181011 \
--image-project=centos-cloud \
--boot-disk-size=40GB \
--boot-disk-type=pd-standard \
--boot-disk-device-name=mlvm
</code></pre></div></div>
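<p>The <code class="language-plaintext highlighter-rouge">MLVM_IP</code> pipeline above just takes the second field of the first line of the <code class="language-plaintext highlighter-rouge">addresses describe</code> output, whose first line looks like <code class="language-plaintext highlighter-rouge">address: &lt;ip&gt;</code>. The same parse as a Ruby sketch, with a canned sample standing in for real gcloud output:</p>

```ruby
# Extract the reserved IP from `gcloud beta compute addresses describe`
# output. The sample below is canned for illustration; a real run would
# capture the command's output instead.
def address_from_describe(output)
  # Equivalent of `head -n1 | awk '{print $2}'`:
  output.lines.first.split[1]
end

sample = <<~OUT
  address: 35.231.0.10
  addressType: EXTERNAL
  networkTier: PREMIUM
OUT

puts address_from_describe(sample) # prints 35.231.0.10
```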
<p>Configure Virtual Machine:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gcloud beta compute ssh user@mlvm
$ sudo su
$ cd ~/
$ curl https://download.docker.com/linux/centos/docker-ce.repo \
> /etc/yum.repos.d/docker-ce.repo
$ curl https://nvidia.github.io/nvidia-docker/centos7/nvidia-docker.repo \
> /etc/yum.repos.d/nvidia-docker.repo
$ yum install --assumeyes \
"@Development Tools" \
"kernel-devel-$(uname -r)" \
"kernel-headers-$(uname -r)" \
"docker-ce-18.06.1" \
"nvidia-docker2-2.0.3"
$ curl https://us.download.nvidia.com/tesla/396.44/NVIDIA-Linux-x86_64-396.44.run \
> NVIDIA-Linux-x86_64-396.44.run
$ sh NVIDIA-Linux-x86_64-396.44.run --silent
$ systemctl enable docker
$ systemctl start docker
$ docker run \
--runtime=nvidia \
-it \
--rm \
tensorflow/tensorflow:1.11.0-devel-gpu \
python -c "import tensorflow as tf; print(tf.contrib.eager.num_gpus())"
</code></pre></div></div>
<p>😄🙌🎉 … 🔥💰</p>
<p>Destroy Virtual Machine:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ exit # Exit from 'sudo su'
$ exit # Exit from 'gcloud beta compute ssh user@mlvm'
$ gcloud beta compute \
addresses delete mlvm \
--region=us-east1
$ gcloud beta compute \
instances delete mlvm \
--zone=us-east1-b
</code></pre></div></div>
<h1>Benchmarking Ruby with GCC (4.4, 4.7, 4.8, 4.9) and Clang (3.2, 3.3, 3.4, 3.5)</h1>
<p>Posted 2014-12-12</p>
<p>This post is partially inspired by <a href="http://cirandas.net/brauliobo/blog/ruby-compiled-with-clang-is-8-faster-than-with-gcc-4.9-and-44-faster-than-with-gcc-4.7.2">Braulio Bhavamitra’s comments about Ruby being faster when compiled with Clang rather than GCC</a> and partially by <a href="https://www.usenix.org/conference/lisa13/technical-sessions/plenary/gregg">Brendan Gregg’s comments about compiler optimisation during his Flame Graphs talk at USENIX LISA13</a> (0:33:30).</p>
<p>In short I wanted to look at what kind of performance we are leaving on the table by not taking advantage of 1) The newest compiler versions & 2) The most aggressive compiler optimizations. This is especially pertinent to those of us deploying applications on PaaS infrastructure where we often have zero control over such things. Does the cost-benefit analysis still work out the same when you take into account a 10/20/30% performance hit?</p>
<p>All tests were run on AWS from an m3.medium EC2 instance and the AMI used was a modified copy of one of my weekly generated <a href="https://github.com/p8952/genstall">Gentoo Linux AMIs</a>. The version of Ruby was 2.1 while the tests themselves are from <a href="https://github.com/acangiano/ruby-benchmark-suite">Antonio Cangiano’s Ruby Benchmark Suite</a>. The tooling used to run them is <a href="https://github.com/p8952/ruby-compiler-benchmark">available on my GitHub</a> if you want to try this out for yourself.</p>
<p>The full test suite was run for each of the following compiler variants; O3 was not used with Clang since it only adds a single additional flag:</p>
<ul>
<li>GCC 4.4 with O2 – Ships with Ubuntu 10.04 (Lucid) & RHEL/CentOS 6</li>
<li>GCC 4.4 with O3</li>
<li>GCC 4.7 with O2 – Ships with Debian 7 (Wheezy) & Ubuntu 12.04 (Precise)</li>
<li>GCC 4.7 with O3</li>
<li>GCC 4.8 with O2 – Ships with Ubuntu 14.04 (Trusty) & RHEL/CentOS 7</li>
<li>GCC 4.8 with O3</li>
<li>GCC 4.9 with O2 – Ships with Debian 8 (Jessie)</li>
<li>GCC 4.9 with O3</li>
<li>Clang 3.2 with O2</li>
<li>Clang 3.3 with O2</li>
<li>Clang 3.4 with O2</li>
<li>Clang 3.5 with O2</li>
</ul>
<p>Each variant was then given a number of points per test based on its ranking, 0 points to the variant which performed the best, 1 to the second best, and so on until 11 points were given to the variant which performed the worst.</p>
<p>These scores were then added up per variant and plotted onto a bar graph to try and visualize performance per variant.</p>
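<p>The ranking scheme described above is simple to express in code. A sketch with invented timings (lower is better) rather than the real benchmark data:</p>

```ruby
# For each test, rank the variants by time (fastest first) and award
# 0 points to the fastest, 1 to the next, and so on; then sum the
# points per variant. Timings here are made up for illustration.
def rank_points(results)
  scores = Hash.new(0)
  results.each_value do |times|
    times.sort_by { |_variant, t| t }.each_with_index do |(variant, _t), rank|
      scores[variant] += rank
    end
  end
  scores
end

results = {
  "bm_app_fib"  => { "gcc49-O2" => 9.1, "clang35-O2" => 9.4, "clang34-O2" => 9.9 },
  "bm_so_sieve" => { "gcc49-O2" => 4.2, "clang35-O2" => 4.0, "clang34-O2" => 4.5 },
}

scores = rank_points(results)
puts scores # lowest total = best overall
```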
<iframe height="650" style="width: 100%;" scrolling="no" title="Benchmarking Ruby With GCC - Graph Total" src="https://codepen.io/p8952/embed/preview/rpLjeX?height=265&theme-id=dark&default-tab=result" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true"></iframe>
<p>From this we can determine that:</p>
<ol>
<li>Your choice of compiler does have a non-negligible effect on the performance of your runtime environment.</li>
<li>Modern versions of GCC (4.7 & 4.8) and Clang (3.2 & 3.3) have very similar performance.</li>
<li>Clang 3.4 seems to suffer from some performance regressions in this context.</li>
<li>The latest version of GCC (4.9) is ahead by a clear margin.</li>
<li>All O3 variants except GCC 4.8 performed worse than their O2 counterparts. This is not that unusual; using O3 will very often degrade performance or even break an application altogether. However, the default Makefile shipped with Ruby 1.9.3 and above uses O3, which appears to hurt performance.</li>
</ol>
<p>Of course the standard disclaimers apply. Benchmarking correctly is hard, you may not see the same results in your specific environment, do not immediately recompile everything in prod using GCC 4.9, etc.</p>
<p>Update:</p>
<p>Lots of people asked to see the raw data plotted as well as the relative performance, so here it is. For each test the average score across all variants was calculated and used as the baseline, marked as 0. Then for each test/variant a percentage was calculated showing how much faster or slower it was than the baseline.</p>
<p>For example on test eight GCC 4.9 O2 was 7% faster than the baseline while Clang 3.5 was 2% faster than the baseline. From this we can infer that GCC 4.9 O2 was 5% faster than Clang 3.5 in that test.</p>
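<p>The baseline calculation can be sketched the same way: for each test, take the mean time across variants as 0 and express each variant as a percentage faster (positive) or slower (negative) than that mean. The numbers below are illustrative, not the real data:</p>

```ruby
# Percent faster (+) or slower (-) than the per-test mean, for one test.
# Lower time is better, so a time below the mean counts as "faster".
# Timings are invented for illustration.
def percent_vs_baseline(times)
  mean = times.values.sum / times.size.to_f
  times.transform_values { |t| ((mean - t) / mean * 100).round(1) }
end

test_eight = { "gcc49-O2" => 9.3, "clang35-O2" => 9.8, "gcc44-O3" => 11.0 }
result = percent_vs_baseline(test_eight)
puts result # gcc49-O2 comes out a few percent ahead of clang35-O2
```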
<p>Since this makes the graph very cluttered it is best that you only select a few variants at once; you can also pan and zoom.</p>
<iframe height="650" style="width: 100%;" scrolling="no" title="Benchmarking Ruby With GCC - Graph Percent" src="https://codepen.io/p8952/embed/XVKeWq?height=265&theme-id=dark&default-tab=result" frameborder="no" loading="lazy" allowtransparency="true" allowfullscreen="true"></iframe>
<h1>Listing EC2 instances in all regions</h1>
<p>Posted 2014-12-11</p>
<p>When working with EC2 instances across multiple regions I’ve found it’s near
impossible to get a good overview of what is running where. This can be
especially annoying when you are automatically launching a number of short
lived instances.</p>
<p>To prevent me having to go through 9 different web pages to see what I
currently have running I found it easier to just use the API and list active
instances from the CLI.</p>
<p>Install dependencies:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gem install aws-sdk pmap
</code></pre></div></div>
<p>/usr/local/bin/aws-list:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/usr/bin/ruby
require 'aws-sdk'
require 'pmap'
def ec2(region = 'us-east-1')
ec2 = AWS::EC2.new(
access_key_id: ENV['AWS_ACCESS_KEY'],
secret_access_key: ENV['AWS_SECRET_KEY'],
region: region
)
ec2
end
def list_instances
instances = []
ec2.regions.peach do |region|
ec2.regions[region.name].instances.peach do |instance|
next if instance.status == :terminated
instances << instance
end
end
instances
end
list_instances.peach do |instance|
puts "#{instance.id}\t\t#{instance.availability_zone}\t\t#{instance.status}\t\t#{instance.ip_address}\n"
end
</code></pre></div></div>
<p>Listing instances:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ export AWS_ACCESS_KEY="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
$ export AWS_SECRET_KEY="ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890"
$ aws-list
i-16b78754 eu-west-1a running 54.77.218.113
i-0025e1e6 eu-west-1a running 54.76.129.127
i-3926e2df eu-west-1a running 54.154.52.146
i-4924e0af eu-west-1a running 54.154.52.77
i-c424e022 eu-west-1a running 54.72.131.127
i-0c25e1ea eu-west-1a running 54.154.51.140
i-9c25e17a eu-west-1a running 54.154.49.204
i-4b24e0ad eu-west-1a running 54.77.225.135
i-33e929f2 eu-central-1b running 54.93.164.233
i-c324e025 eu-west-1a running 54.76.98.165
i-3f26e2d9 eu-west-1a running 54.154.47.126
i-8027e366 eu-west-1a running 54.154.20.140
i-0d25e1eb eu-west-1a running 54.77.100.132
i-d718edd9 us-west-2c running 54.149.35.63
i-0c2028e6 us-east-1a running 54.164.193.104
i-5b95e54e sa-east-1a running 54.94.165.7
i-2dad38de ap-northeast-1a running 54.65.157.129
i-625a80af ap-southeast-1a running 54.169.195.201
i-a06e006f ap-southeast-2a running 54.66.184.34
i-2dbce5e5 us-west-1a running 54.67.67.18
</code></pre></div></div>
<h1>Installing Vagrant in non-supported environments</h1>
<p>Posted 2014-12-08</p>
<p>Get sources:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone git@github.com:mitchellh/vagrant.git
$ cd vagrant
$ git checkout tags/v1.6.5
</code></pre></div></div>
<p>Install dependencies:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ gem install bundler -v '< 1.7.0'
$ bundle install
</code></pre></div></div>
<p>Patch Vagrant<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup><sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup><sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>diff --git a/bin/vagrant b/bin/vagrant
index 21630e1..5e24279 100755
--- a/bin/vagrant
+++ b/bin/vagrant
@@ -66,6 +66,8 @@ end
# Setup our dependencies by initializing Bundler. If we're using plugins,
# then also initialize the paths to the plugins.
+load_path = []
+$LOAD_PATH.each { |path| load_path << path }
require "bundler"
begin
Bundler.setup(:default, :plugins)
@@ -94,6 +96,7 @@ rescue Bundler::VersionConflict => e
$stderr.puts e.message
exit 1
end
+load_path.each { |path| $LOAD_PATH.push(path) unless $LOAD_PATH.include?(path) }
# Stdout/stderr should not buffer output
$stdout.sync = true
@@ -164,11 +167,6 @@ begin
logger.debug("Creating Vagrant environment")
env = Vagrant::Environment.new(opts)
- if !Vagrant.in_installer? && !Vagrant.very_quiet?
- # If we're not in the installer, warn.
- env.ui.warn(I18n.t("vagrant.general.not_in_installer") + "\n", prefix: false)
- end
-
begin
# Execute the CLI interface, and exit with the proper error code
exit_status = env.cli(argv)
diff --git a/lib/vagrant/bundler.rb b/lib/vagrant/bundler.rb
index 05867da..54f9fb8 100644
--- a/lib/vagrant/bundler.rb
+++ b/lib/vagrant/bundler.rb
@@ -18,8 +18,7 @@ module Vagrant
end
def initialize
- @enabled = true if ENV["VAGRANT_INSTALLER_ENV"] ||
- ENV["VAGRANT_FORCE_BUNDLER"]
+ @enabled = true
@enabled = !::Bundler::SharedHelpers.in_bundle? if !@enabled
@monitor = Monitor.new
---
</code></pre></div></div>
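<p>The first two hunks of the patch implement one idea: snapshot <code class="language-plaintext highlighter-rouge">$LOAD_PATH</code> before <code class="language-plaintext highlighter-rouge">Bundler.setup</code> prunes it, then merge the saved entries back afterwards. The idea in isolation, using a plain array in place of <code class="language-plaintext highlighter-rouge">$LOAD_PATH</code> so nothing real is modified:</p>

```ruby
# Save a copy of the load path, let Bundler rewrite it, then restore
# any entries it removed. A plain array and invented paths stand in
# for $LOAD_PATH here.
load_path = ["/usr/lib/ruby", "/opt/vagrant/lib", "/home/user/gems"]

saved = load_path.dup

# Bundler.setup(:default, :plugins) would rewrite the path at this
# point; simulate it pruning everything except its own entry:
load_path.replace(["/bundler/gems"])

# The second hunk: push each saved entry back unless already present.
saved.each { |path| load_path.push(path) unless load_path.include?(path) }
# load_path now holds Bundler's entry plus the original three.
```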
<p>Test and install:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ rake test:unit
$ rake install
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ vagrant status
You appear to be running Vagrant outside of the official installers.
Note that the installers are what ensure that Vagrant has all required
dependencies, and Vagrant assumes that these dependencies exist. By
running outside of the installer environment, Vagrant may not function
properly. To remove this warning, install Vagrant using one of the
official packages from vagrantup.com.
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ vagrant plugin install vagrant-aws
Installing the 'vagrant-aws' plugin. This can take a few minutes...
Vagrant's built-in bundler management mechanism is disabled because
Vagrant is running in an external bundler environment. In these
cases, plugin management does not work with Vagrant. To install
plugins, use your own Gemfile. To load plugins, either put the
plugins in the `plugins` group in your Gemfile or manually require
them in a Vagrantfile.
</code></pre></div></div>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>Without this patch Vagrant will give the following warning: <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Without this patch Vagrant will give the following error: <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>Without this patch Vagrant will give an <a href="https://github.com/mitchellh/vagrant/issues/5172">error when running in a directory containing a Gemfile</a>. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
<h1>VMware ESXi MEMORY_SIZE_ERROR</h1>
<p>Posted 2014-04-22</p>
<p>VMware’s ESXi 5.5 increases the recommended memory requirement from 4GB to 8GB,
their own System Requirements document stating that:</p>
<p>“You have 4GB RAM. This is the minimum required to install ESXi 5.5. Provide at
least 8GB of RAM to take full advantage of ESXi features and run virtual
machines in typical production environments.”</p>
<p>However when installing ESXi on a system with 4GB of RAM you will receive an
error along the lines of:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><MEMORY_SIZE ERROR: This host has 3.71 GiB of RAM. 3.97 GiB are needed>
</code></pre></div></div>
<p>You’ll notice that the people writing the System Requirements document are using
the SI unit of gigabyte (GB) while those writing the ESXi installer are using
the binary unit of gibibyte (GiB). As such ESXi does not require 4,000,000,000
bytes of RAM but 4,294,967,296.</p>
<p>Luckily the fix is easy: we can modify the minimum amount of RAM the
installer checks for, and ESXi will then install without issue.</p>
<p>Switch to the virtual terminal by hitting Alt+F1 and login as ‘root’ with the
password field blank.</p>
<p>After logging in you need to tweak the permissions on upgrade_precheck.py:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd /usr/lib/vmware/weasel/util/
$ rm upgrade_precheck.pyc
$ cp upgrade_precheck.py upgrade_precheck.py.tmp
$ cp upgrade_precheck.py.tmp upgrade_precheck.py
$ chmod 777 upgrade_precheck.py
</code></pre></div></div>
<p>Open up upgrade_precheck.py in vi and replace:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>MEM_MIN_SIZE = (4 * 1024 - 32) * SIZE_MiB
</code></pre></div></div>
<p>With:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>MEM_MIN_SIZE = (2 * 1024 - 32) * SIZE_MiB
</code></pre></div></div>
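<p>To see why the stock check rejects a 4GB machine, plug the numbers in. <code class="language-plaintext highlighter-rouge">SIZE_MiB</code> is one mebibyte (2<sup>20</sup> bytes), so the stock threshold is (4 × 1024 − 32) MiB, just under 4 GiB, while the host in the error message reports only 3.71 GiB. A quick check of both thresholds:</p>

```ruby
# Reproduce the installer's thresholds in bytes and express them in GiB.
SIZE_MIB = 2**20 # upgrade_precheck.py calls this SIZE_MiB

stock   = (4 * 1024 - 32) * SIZE_MIB
patched = (2 * 1024 - 32) * SIZE_MIB

gib = ->(bytes) { (bytes / 2.0**30).round(2) }
puts gib.call(stock)   # prints 3.97 -- matches "3.97 GiB are needed"
puts gib.call(patched) # prints 1.97 -- a 3.71 GiB host now passes
```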
<p>Then restart the ESXi installer by killing the weasel process.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ps -c | grep weasel
$ kill 12345
</code></pre></div></div>
<p>You will automatically get switched away from the virtual terminal and can
continue the installation.</p>