Episode 46: Multi-layer authz? Yes please!

Q: Where should you enforce your authorization policy?
A: Everywhere you can!
There are four common enforcement points for a defense-in-depth strategy (see the sketch after this list):

⚡ during the authentication ceremony
⚡ in the resource server
⚡ at the API gateway
⚡ in service-to-service communication
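
As a minimal sketch of what enforcing the same policy at more than one layer can look like, the hypothetical Python snippet below repeats a scope check in an API-gateway-style filter and again inside the resource server, so a request that slips past one layer is still stopped by the next. All names and claims are illustrative, not drawn from the episode.

```python
# Illustrative sketch only: the same rule is enforced at the API gateway
# AND again inside the resource server, so neither layer has to assume the
# other one did its job. All names and claims are hypothetical.

REQUIRED_SCOPE = "reports.read"


def gateway_filter(token_claims: dict) -> bool:
    """Coarse-grained check at the API gateway: reject early if the access
    token does not carry the scope this route requires."""
    return REQUIRED_SCOPE in token_claims.get("scope", "").split()


def resource_server_handler(token_claims: dict, resource_owner: str) -> dict:
    """Fine-grained check in the resource server: re-verify the scope and add
    an object-level rule (owners only) before returning any data."""
    if REQUIRED_SCOPE not in token_claims.get("scope", "").split():
        return {"status": 403, "body": "insufficient scope"}
    if token_claims.get("sub") != resource_owner:
        return {"status": 403, "body": "not the resource owner"}
    return {"status": 200, "body": "report contents"}


if __name__ == "__main__":
    claims = {"sub": "alice", "scope": "reports.read profile"}
    if gateway_filter(claims):                           # layer 1: API gateway
        print(resource_server_handler(claims, "alice"))  # layer 2: resource server
    else:
        print({"status": 403, "body": "blocked at the gateway"})
```

The other two layers from the list would apply analogous checks, for example step-up requirements during the authentication ceremony and mutual verification of workload identities in service-to-service calls.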

Episode 45: Intro to the Cedarling

Cedar is a policy syntax invented by Amazon and used by AWS Verified Permissions, its authorization-as-a-service offering. Gluu is working on a new product at the Janssen Project called the “Cedarling,” which leverages the Cedar policy syntax and Amazon’s open source Cedar Rust engine. The Cedarling can run anywhere: as a local agent in the browser, embedded in a mobile application, or as a cloud service. It needs no external data, because it trusts the JWTs that the application supplies with each request. Beyond policy evaluation, the Cedarling agent has two other capabilities: JWT validation and audit logging. In this episode, Mike will present Gluu’s current progress on the Cedarling and show a demo of it in action!
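
To make the tokens-in, decision-out model concrete, here is a minimal, hypothetical sketch in Python with an embedded Cedar policy string. The field names and the policy are illustrative only and do not reflect the Cedarling’s actual interface, which is still under development.

```python
# Hypothetical sketch only: this is NOT the Cedarling's actual API.
# It illustrates the shape of a token-based authorization request and a
# Cedar policy; all field names and values here are made up.

# A Cedar policy (Cedar syntax, embedded as a string): permit the "view"
# action on any resource when the principal built from the JWT claims
# has the admin role.
CEDAR_POLICY = """
permit(
    principal,
    action == Action::"view",
    resource
) when {
    principal.role == "admin"
};
"""

# The application supplies only its tokens plus the decision it needs;
# the policy engine validates the JWTs, maps their claims onto the Cedar
# principal, evaluates the policy, and writes an audit log entry.
authz_request = {
    "tokens": {
        "access_token": "<JWT issued by the OpenID Provider>",
        "id_token": "<JWT issued by the OpenID Provider>",
    },
    "action": 'Action::"view"',
    "resource": {"type": "Report", "id": "q3-financials"},
    "context": {"ip_address": "203.0.113.7"},
}
```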

Episode 44: Securing identity and context in microservices

Defending against privileged user compromise and software supply chain attacks requires newer standards that reduce the implicit trust placed in the non-human (or machine) identities that services use to communicate with each other. Transaction tokens, a proposed IETF standard, can effectively defend against these attacks. Learn all about them in this episode with Atul Tulshibagwale, CTO of SGNL, inventor of CAEP, and an Okta Identity 25 listee.

Episode 43: Intersection of IAM with cloud

Managing IAM for your own users and employees is hard enough, and with the adoption of cloud (including SaaS) it’s only getting harder, especially when you add third parties into the mix, such as contractors, BPOs, MSPs, and other kinds of vendors. In this podcast we’ll discuss the intersection of IAM with cloud (with a particular focus on AWS cross-account access and the Snowflake incident) and with Third Party Cyber Risk Management in general.

Episode 41: National ID Challenges

What are the priorities and tradeoffs of certain approaches to building a national identity infrastructure? How can you build a system that enables people to assert their identity and claims, and also protects their privacy?

What are the most pressing use cases? Voting? Healthcare? Opening a bank account? Other private-sector RPs?

How to balance the tradeoffs of privacy, fraud reduction, and poverty reduction presented by a system like Aadhaar.

Do Verifiable Credentials offer a “leapfrog opportunity”?

What to put on the blockchain (if anything)?

What did Singpass get right in Singapore?

Whether to engage with the 50-in-5 initiative, DPGA, or Govstack

Sovereignty vs. availability?

Accessibility vs. progress?

Episode 40: You got the JWT… now what?

Once you have obtained a JSON Web Token (JWT), the next steps involve understanding, securely storing, and effectively using it for authentication and communication within your web application. A JWT comprises three parts: the Header, Payload, and Signature. It is crucial to store the JWT securely on the client side; an HTTP-only cookie is generally preferable to local storage, because local storage is readable by scripts and therefore exposed to cross-site scripting (XSS) attacks. For API requests, the JWT should be included in the Authorization header using the Bearer scheme. On the server side, you must verify the token’s signature, check its expiration, and validate its claims to ensure its authenticity and relevance. Handling token expiration through refresh tokens, decoding the JWT to access user information, and protecting your endpoints with role-based access control (RBAC) are essential steps to maintain security. Additionally, monitoring and logging JWT usage are vital for auditing and troubleshooting. Proper handling of JWTs keeps your authentication process secure and efficient and safeguards your application against potential vulnerabilities.
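
To make these steps concrete, here is a brief, hedged sketch in Python using the requests and PyJWT libraries, with hypothetical URLs, keys, and role names: the client sends the JWT in the Authorization header using the Bearer scheme, and the server verifies the signature, expiration, audience, and issuer before applying a simple role check.

```python
import jwt        # PyJWT: pip install pyjwt
import requests   # pip install requests

API_URL = "https://api.example.com/reports"      # hypothetical endpoint
PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n..."    # issuer's verification key (placeholder)


# --- Client side: send the JWT using the Bearer scheme --------------------
def call_api(token: str) -> requests.Response:
    return requests.get(API_URL, headers={"Authorization": f"Bearer {token}"})


# --- Server side: verify signature, expiration, audience, and issuer ------
def verify_token(token: str) -> dict:
    try:
        return jwt.decode(
            token,
            PUBLIC_KEY,
            algorithms=["RS256"],                 # pin the expected algorithm
            audience="https://api.example.com",   # expected aud claim
            issuer="https://idp.example.com",     # expected iss claim
            options={"require": ["exp", "sub"]},  # reject tokens missing these claims
        )
    except jwt.ExpiredSignatureError:
        raise PermissionError("token expired; obtain a new one with the refresh token")
    except jwt.InvalidTokenError as err:
        raise PermissionError(f"invalid token: {err}")


# --- Simple role-based access control over the verified claims ------------
def require_role(claims: dict, role: str) -> None:
    if role not in claims.get("roles", []):       # the "roles" claim name is hypothetical
        raise PermissionError(f"missing required role: {role}")
```

When verify_token reports an expired token, the client would exchange its refresh token at the token endpoint for a new JWT rather than prompting the user to log in again.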

Episode 39: Blockchain vs. The Right To Be Forgotten – one Solution

1. Can a blockchain be made to support the many new regulations that require the “Right of Erasure”?
2. If so, how can it then remain an immutable source of truth?
3. Does a solution require a specialized blockchain or can it be applied to existing blockchains?
4. What are the issues that arise from incorporating the solution?

Episode 38: Immortal passwords versus vulnerable humans

Immortal Passwords refers to the concept of password practices and protocols that are designed to be incredibly secure and resistant to various forms of cyber-attacks, essentially making them ‘immortal’ in the face of evolving threats. These passwords typically adhere to stringent security standards, including long character lengths, a mix of symbols, numbers, and letters, and regular updates. Additionally, they are often managed through sophisticated password management systems or algorithms that can generate and store complex passwords securely.

Vulnerable Humans, on the other hand, highlight the inherent weaknesses in human behaviors and practices when it comes to password security. Despite the availability of strong password guidelines, many individuals still use weak passwords, reuse passwords across multiple sites, or fail to update them regularly. This makes them susceptible to common cyber threats such as phishing, brute force attacks, and credential stuffing.

Episode 37: The Rise of Browser Identity APIs

In the last few years, a number of new browser APIs have been proposed and implemented to help developers authenticate people or establish identity. This talk will discuss a few of them, including WebAuthn, WebOTP, FedCM, DBSC, and the Digital Credentials API.

Episode 36: Deepfakes II: BioID’s Combat Strategy

As generative AI advances, the applications for deepfake detection are multiplying. The question arises: can online media and identity verification processes still be deemed reliable? Discover methods to protect your identity and systems against impersonation, and learn how to identify deepfakes on your own (or is that even possible?).