The Zero-Trust proxy hype - can we reverse-proxify everything and ditch VPNs?

WPN - Web Private Network


After Goog recently announced that some parts of the org now consider Web portals “secure enough” for certain systems to act as (web) gateways, people started to wonder: “Are we going for perimeter-less security now?” {1} Now!?
Of course, people… here and there… aren’t able to articulate this properly.

What our security circus, desperately in need of new entertainment, has been able to tweet and blog about is that most corps these days don’t think of the perimeter the way they used to. That’s news for some news outlets.
– There are no red lines in a modern network (architecture) any more. Cloud, Site-to-Site and client VPNs, hybrid colocation data-center architectures, auto-scaling, micro-service clusters… (Office) SaaS… Innovation pretty much blew the perimeter into bits and pieces. It’s dead.

Identity and Access Management is the new perimeter

Identity and Access Management (IAM) is the new perimeter {2}. – Typically accompanied by features like Role-Based Access Control (RBAC) {3}, Multi-Factor Authentication (MFA) and Identity Federation {4} (with Goog, Facebook, GitHub, Azure… OAuth2).

Modern enterprise environments use SaaS products like Office 365 or Goog Apps. Within these you usually find federation services like Microsoft Active Directory Federation Services (ADFS), or Red Hat projects like Keycloak, to sign up the departments, to keep the Joiner & Leaver processes in sync with the systems, and to centralise access management. The usual drill.

IAM is the new perimeter AND the new VPN

End users may not like to use a VPN with MFA to access their remote business apps for collaboration. They get a mail with a link to an “internal system”, and they cannot just open it directly on the mobile. They are lost. They may be on the road, in traffic, hiking… and just want to read up on certain things…

The traditional perimeter is dead for certain sections of the internal tools. People need tools everywhere, so the tools must be made available. The more intelligent the tools get, the easier they need to be to access. The people, and the tools. That’s the ROI of Zero Trust.

History Lessons: Reverse Proxies

The early idea was to reverse-proxify the remote app, and to use a Central Auth like Active Directory or LDAP. We put the Reverse Proxy up on a public IP, set some DNS names to that IP, and the Reverse Proxy handled the

  • SSL / TLS termination (including SNI)
  • WAF features, also Anti-BruteForce
  • Logging (Access Logs)
  • and of course Basic Auth - the problem

Basic Auth has some issues:

  1. browsers may not handle expiration very well. Zero Trust proxies are supposed to do this in a better way. Therefore they don’t use Basic Auth.
  2. Basic Auth headers may cause problems with Web Sockets; and let’s ignore the issues with Web Sockets and WAFs. In short: Zero Trust proxies must support Web Sockets.{5}
  3. Getting LDAP to work with typical web servers like Lighttpd, Apache2 or Nginx can be difficult. Often you can only point to the IP of the LDAP endpoint. What if that AD server is under maintenance? Zero Trust proxies must provide a better integration here.
  4. User Agent whitelisting is very difficult with Nginx; you may need to compile an extra module for this on Linux. Zero Trust proxies need extensive white-listing features for API-based tools. Which brings us to the question: at which point are we at 0.1 Trust proxies?
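
To illustrate that pain point: a minimal User-Agent allow-list in Nginx can be approximated with a map block (a sketch; the agent strings and the upstream name are hypothetical), but anything beyond simple pattern matching gets unwieldy fast:

```nginx
# Sketch: allow only known API clients, reject everything else.
# Agent strings and the upstream name are placeholders.
map $http_user_agent $ua_allowed {
    default                0;
    "~^MyDeployBot/"       1;
    "~^MyMonitoringApp/"   1;
}

server {
    listen 443 ssl;

    location / {
        if ($ua_allowed = 0) {
            return 403;
        }
        proxy_pass https://gitlab.internal;
    }
}
```

A Zero Trust proxy should offer this as a first-class, auditable feature instead.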

Learning by doing: Zero Trust self-hosted GitLab with Pritunl Zero

Quick notes on Letsencrypt

I will skip over the Letsencrypt setup:

root@gitlab:~# dpkg -l | grep certbot
ii  certbot                             0.22.2-1+ubuntu16.04.1+certbot+1           all          autom

root@gitlab:~# certbot certonly --manual --preferred-challenges dns -d <sub.domain.tld>

This way I can just set a couple of TXT records and verify that the respective Sub-Domain belongs to me. Then auto-renew the certs with certbot.
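
Since the cert was issued with --manual, renewal only runs unattended if certbot gets a hook that can set the TXT records itself; a sketch (the hook script path is hypothetical, you’d have to write it against your DNS provider’s API):

```shell
# Issue with an auth hook so renewals can run without interaction.
# /usr/local/bin/dns-txt-hook.sh is a placeholder for your own script.
certbot certonly --manual --preferred-challenges dns \
  --manual-auth-hook /usr/local/bin/dns-txt-hook.sh \
  -d <sub.domain.tld>

# /etc/cron.d/certbot - certbot only renews certs that are close to expiry
0 */12 * * * root certbot -q renew
```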

I don’t use a wildcard cert.

Quick notes on GitLab and TLS

I have the following two config variables in my gitlab.rb

external_url ''
letsencrypt['enable'] = false

The internal HTTPS endpoint for the GitLab UI is only reachable internally, and internally I just use a self-signed cert, because the endpoint is just an IP. Due to this, the certificate in this particular setup does not have to be valid. Note that I have to use port 443 / HTTPS because of the rewrite handling; this cannot be configured in this particular case.

root@gitlab:/etc/gitlab/trusted-certs/ # openssl req -x509 -newkey rsa:4096 \
 -keyout key.pem \
 -out cert.pem -days 365 -nodes
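
The bundled Nginx then has to be pointed at that self-signed pair; something along these lines in gitlab.rb should do (the paths are an assumption for this sketch):

```ruby
# gitlab.rb - serve the GitLab UI with the self-signed pair
# (paths are placeholders; adjust to where the files actually live)
nginx['ssl_certificate']     = "/etc/gitlab/ssl/cert.pem"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/key.pem"
```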

Pritunl Zero - OpenSource BeyondCorp style Zero Trust proxy

With Pritunl Zero I can just reverse-proxify the internal GitLab host, and it will not be publicly accessible. Pritunl Zero implements a transparent session layer in between the internet and the internal service {6}. Following the narrative of this post, it acts as a “perimeter”.

Configure a service

The domain reverse-proxifies the GitLab server (which uses SSL internally with a self-signed cert, therefore port 443).

Users with the Role “cloud” are allowed to authenticate.

Web-Sockets are supported.

Setup a node

At the node I assign the (Letsencrypt) SSL cert and set the X-Forwarded-For headers.
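
On the backend side, GitLab’s bundled Nginx can be told to trust those headers, so that its own access logs show the original client IP instead of the proxy; a sketch (the trusted subnet is an assumption, use the Pritunl Zero node’s address):

```ruby
# gitlab.rb - honour X-Forwarded-For coming from the Zero Trust proxy
# (10.0.0.0/8 is a placeholder for the proxy's subnet)
nginx['real_ip_header'] = 'X-Forwarded-For'
nginx['real_ip_trusted_addresses'] = ['10.0.0.0/8']
```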

Check the user and sessions

Check the red trash bin: you can end a session there. The Zero Trust proxy thus allows you to audit the activity.

Log Aggregation

Pritunl Zero doesn’t seem to log much to Syslog:

[2018-06-19 13:01:23][INFO] ▶ router: Starting web server
[2018-06-19 13:01:23][INFO] ▶ router: Starting redirect server

I don’t see access logs. Granted, I can get these from the respective backend server because I set the headers. But this is missing. A proxy like this is supposed to provide full access logs. I mean sure… that’s one way to have a GDPR / DS-GVO compliant product: no logs – no data, no data – no PII. But it’s not the way it’s intended :wink:

edit: There seems to be an option to log requests into Elasticsearch. But I cannot use Rsyslog forwarding this way, or Splunk.
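
As a workaround, the backend’s access logs can still be shipped with plain Rsyslog; a sketch using the imfile module (log path and target host are assumptions):

```
# /etc/rsyslog.d/30-gitlab-access.conf (sketch; path and host are placeholders)
module(load="imfile")
input(type="imfile"
      File="/var/log/gitlab/nginx/gitlab_access.log"
      Tag="gitlab-access:"
      Severity="info")
# forward the tagged messages via TCP (@@ = TCP, @ = UDP)
if $syslogtag == 'gitlab-access:' then @@syslog.example.internal:514
```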

I also don’t see WAF features, which is a good thing. WAF features are not necessary here. A WAF doesn’t make this more secure, unless your attackers can authenticate. But if they can, you have a different problem anyways.

There is a valid argument here that within Zero Trust (let’s ignore MFA) the frontend (the proxy) and the backend (GitLab here) may use the same central authentication store (Active Directory, OpenLDAP, FreeIPA, Keycloak, …). Which means that instead of using stolen credentials once, attackers now use them twice. – And if they can steal credentials, they can probably also steal tokens.

Logon procedure

Finally, this is the session logon, which is externally available:


Workflow: SSL cert -> Backend service

I recently looked into Algo as an IPsec VPN {7} (with certificate based authentication; non-interactive). The logon procedure is much more convenient than it is with OpenVPN: it’s faster and I don’t need extra clients.

Generally I’d avoid using wildcard SSL certs for Zero Trust proxies, to be able to control which services get exported.


As you can see, the backend service (which uses SSL as well) is transparently bridged out via Pritunl Zero, and available externally. A separate login for GitLab is necessary.

In the same fashion this can work with services like

  • Splunk - for Dashboards
  • Jupyter - for ad-hoc data-science
  • Jenkins
  • Grafana
  • Spark
  • QRadar
  • pfSense Web
  • ESXi
  • Jira
  • Confluence…
  • old internal tools.

Wait… what’s the limit? Given that we will use Multi-Factor auth… on occasion. Given that for mobile usage this has convenient advantages, because the session remains in the mobile browser. And all we need is a bookmark… so that we can open our stuff by touching the icon on the home screen. Just that…

Reverse-proxify everything because we trust in Zero Trust?

  • I don’t recommend exporting anything that does not hit the relevant scores of the CIS benchmark. You can measure this with Jenkins {8}.
  • Passwords should be screened according to NIST SP 800-63 {9} – bad credentials should be phased out.
  • For production setups, Multi-Factor Auth (like with Duo) is a must-have.
  • Infrastructure UIs (like the pfSense Web UI or the ESXi / vSphere Web UI) should be excluded (an IPsec VPN is convenient enough, and the mobile use-case does not apply).
  • Consider using different authentication stores (different credentials) for different layers of the Zero Trust network. But if MFA is not secure enough for you, ditch Zero Trust approaches for now.
  • My assumption is that you don’t want to host a Zero Trust proxy without a NAT gateway (or something like pfSense) in front of it. The Pritunl service stack seems to be based on Go, and it’s OpenSource. But it’s new code.

As a side note: Algo also sets up SSH users with limited shell access {10} for tunneling traffic. In many cases SSH tunnels can be lightweight alternatives to VPNs (with the SOCKS5 -D flag of the OpenSSH client, for example).
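
The SOCKS5 variant mentioned above is a one-liner with the stock OpenSSH client (host names here are placeholders):

```shell
# Open a local SOCKS5 proxy on 127.0.0.1:1080, tunneled through the SSH host;
# -N means: no remote command, just the tunnel
ssh -D 1080 -N user@algo.example.internal

# Point tools at the proxy, e.g. with curl:
curl --socks5-hostname 127.0.0.1:1080 https://gitlab.internal/
```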

With Pritunl Zero we can add an Authority to the SSH session initiation (and you get MFA and RBAC here).

Personally I decided that I don’t want to use this, and prefer JumpHosts {11} with RSA keys in their respective network segments, and with their respective LDAP OUs. – But in case you have many (limited) SSH users, Pritunl Zero’s workflow can be more convenient and allows you to temporarily re-use the authentication. You should look into this, even though I didn’t (yet).
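
For reference, the JumpHost approach from {11} boils down to a few lines of ~/.ssh/config per network segment (host names and key paths are placeholders):

```
# ~/.ssh/config - reach segmented hosts through the segment's jump host
Host jump-dmz
    HostName jump.dmz.example.internal
    User ops
    IdentityFile ~/.ssh/id_rsa_dmz

# everything in the DMZ segment goes through jump-dmz (OpenSSH >= 7.3)
Host *.dmz.example.internal
    ProxyJump jump-dmz
```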


In summary: in Zero Trust we trust. It’s useful. With Pritunl Zero (or other stacks) it’s a straightforward setup. Much more convenient than fighting Nginx or Apache2 to achieve the same (with SAML modules… or other painful tech).

I don’t use this specific setup for SSH (for now), although there are some convenient advantages. But if you trust in your security program, and in the system benchmarks and hardening procedures, what’s the residual risk here? Speaking about just any service? Why hide what we have hardened? I think a security program could focus on this as an enabler: secure systems can be exported with Zero Trust, given that they meet certain criteria.

Compliance standards like ISO 27001 are risk-driven. If you do a risk assessment and document it, this isn’t a problem. ISO 27001 is very generic, and certain control objectives can be accounted for by the RBAC ACLs and the audit functions.

Sure, it doesn’t work for environments with cardholder data under PCI DSS. But there will always be limits. With or without compliance. PCI DSS is a Defense in Depth standard for very specific environments. And the restrictions only apply to select SAQs anyways.

Summary: 3 key-takeaways of Zero Trust

  • Zero Trust proxies need to support MFA, RBAC (with ACLs), SSO (AD, LDAP), Session Management, Logging, Web Sockets and SSL / TLS and Black / White-listing conditions (User Agents of certain API clients or Apps)
  • The concept can enable the adoption of collab tools on mobile devices, since VPN connections over cellular networks are very unreliable.
  • This can also simplify contractor on-boarding, since many freelancers and remote workers don’t need full VPN access.

I think there is a lot of benefit in Zero Trust networks, and that the risks can be balanced if you use the technology in a sane way. Nowadays it’s an InfoSec challenge to find out what that sane way looks like.


{1} Goog Blog: BeyondCorp to ditch your VPN

{2} ITIL blog: Identity Lifecycle and the new perimeter

{3} RBAC in Azure

{4} Federated Identity (Wikipedia)

{5} An nginx Reverse Proxy config for the ESXi web UI that uses Web Sockets - plus an example of a WAF failing to support this

{6} Pritunl Zero and a Web Service

{7} Algo VPN for IPsec

{8} Measure CIS benchmark compliance with Jenkins

{9} From the perspective of Zero Trust networks NIST SP 800 makes more sense

{10} Algo sets up SSH users for tunneling traffic - this is also a lightweight gateway approach

{11} OpenSSH and JumpHosts - the way of the corkscrew


19.06.2018 - publication, just the Web services with Pritunl Zero