#074 – Exploring the Cloud Native Computing Foundation (CNCF) with William Rizzo

In this conversation, Johan van Amersfoort interviews William Rizzo about the Cloud Native Computing Foundation (CNCF) and its impact on cloud computing and modern infrastructure. They discuss the CNCF's mission of making cloud native technology ubiquitous and the support it provides to the projects it hosts, as well as the advantages of using CNCF projects for IT admins and platform engineers, including the ability to build platforms from integrated, efficient tools. They also look at future trends and developments within the CNCF, such as platform engineering and AI/ML workloads, and how IT professionals can prepare for these changes. Finally, William shares his personal project on platform engineering and explains why calculating the return on investment of platform engineering practices matters.

Takeaways

  • The CNCF aims to make cloud native technology ubiquitous by providing support and resources to cloud native projects.
  • Using CNCF projects allows IT admins and platform engineers to build integrated and efficient platforms.
  • Platform engineering and AI/ML workloads are future trends within the CNCF.
  • Calculating the return on investment of platform engineering practices is important; William is working on a project to simplify that calculation.

Chapters

00:00 – Introduction and William Rizzo’s IT Career

03:09 – Overview of the Cloud Native Computing Foundation

06:39 – Other Core Projects and Technologies Hosted by the CNCF

10:17 – Impact of CNCF Projects and Principles on Cloud Computing

13:31 – Advantages of Using CNCF Projects for IT Admins and Platform Engineers

14:45 – Interoperability and Standardization across Cloud Platforms

23:45 – Resources for Learning about Cloud Native Technologies

28:17 – Future Trends and Developments within the CNCF

38:49 – William Rizzo’s Personal Project on Platform Engineering

46:04 – Calculating the Return on Investment for Platform Engineering Practices

49:28 – Closing Remarks

More about William: https://www.linkedin.com/in/william-rizzo/
The link to the CNCF website: https://www.cncf.io/

#073 – What was announced for vSAN in the last month(s)? Featuring Pete Koehler!

In episode 073 Duncan and Pete discuss various updates and changes related to vSAN, including ReadyNode configurations, licensing, vSAN Max, capacity reporting, and compression ratios. They highlight the improvements in compression ratios with vSAN ESA, which can result in significant space efficiency gains. They also discuss the use cases for vSAN Max and vSAN HCI, as well as the flexibility in making changes to ReadyNode configurations. Overall, they emphasize the ongoing development and exciting future of vSAN and VMware Cloud Foundation.

Takeaways

  • vSAN ESA offers improved compression ratios, with an average of 1.5x and some customers achieving 1.7x or better.
  • vSAN Max is a centralized shared storage solution that provides storage services to multiple vSphere clusters.
  • Customers can choose between vSAN Max and vSAN HCI based on their needs, such as independent scaling of storage and compute, separate lifecycle management, extending the life of existing vSphere clusters, or specific application requirements.
  • Changes in ReadyNode configurations for vSAN Max have reduced the minimum number of hosts required and lowered the hardware requirements, making it more accessible for smaller enterprises.
  • Capacity reporting in vSAN has been improved with the introduction of LFS (log-structured file system) overhead reporting, providing more accurate information on capacity usage.
  • vSAN ESA’s improved compression ratios, combined with RAID 5 or RAID 6 erasure coding, can result in significant space efficiency gains compared to the original storage architecture; a rough illustration of the math follows this list.
  • Ongoing development and updates are expected in vSAN and VMware Cloud Foundation, with exciting new capabilities on the horizon.
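
To make the space-efficiency point above concrete, here is a minimal back-of-the-envelope sketch. Only the 1.5x average compression ratio comes from the episode; the protection overheads used (RAID 1 mirroring at 2x, RAID 5 at 1.25x) are assumed example values for illustration, not figures Duncan and Pete quoted.

```python
# Illustrative arithmetic only: the erasure-coding overheads below are assumed
# example values, not numbers from the episode.

def effective_capacity_ratio(compression_ratio: float, protection_overhead: float) -> float:
    """Logical data stored per unit of raw capacity:
    compression gain divided by the data-protection overhead."""
    return compression_ratio / protection_overhead

# Original storage architecture example: no compression gain, RAID 1 mirroring (2x overhead).
osa_raid1 = effective_capacity_ratio(1.0, 2.0)    # 0.5

# ESA example: 1.5x average compression with RAID 5 erasure coding (assumed 1.25x overhead).
esa_raid5 = effective_capacity_ratio(1.5, 1.25)   # 1.2

print(f"Relative space efficiency: {esa_raid5 / osa_raid1:.1f}x")  # ~2.4x in this example
```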

#072 – Chris Gully and the rise of Small Language Models

Chris Gully discusses his current role in the new Broadcom organization and the highlights of his career, emphasizing the importance of staying relevant in the technology industry and the value of working with cool, smart people. The conversation then shifts to small language models (SLMs) and their role in the landscape of gen AI applications. Gully explains that SLMs offer a more progressive approach to working with large language models (LLMs) and enable more efficient and scalable deployments. The discussion covers the components of gen AI applications, the need to right-size models, and the challenges of scalability and efficiency, and Gully highlights data as the most important and largely untapped asset organizations have for driving business outcomes through AI. They also compare SLMs and LLMs in terms of efficiency, optimization, and governance, and dig into infrastructure management and resource allocation in AI deployments: right-sizing workloads, distributing them across data centers, and maximizing resource utilization. The conversation concludes with the benefits and limitations of fine-tuning LLMs, the potential future of SLMs, and broader trends in machine learning and AI, including advances in the underlying math and the need for accessible, efficient technology.

Takeaways

  • Staying relevant in the technology industry is crucial for career success.
  • Small language models (SLMs) offer a more efficient and scalable approach to working with large language models (LLMs).
  • Data is the most important and untapped asset for organizations, and leveraging it through AI can drive business outcomes.
  • Scalability and efficiency are key challenges in deploying gen AI applications.
  • Fine-tuning LLMs can enhance their precision and reduce the need for extensive training.
  • The future of SLMs may involve dynamic training and efficient distribution to support evolving business needs.
  • SLMs offer advantages over LLMs in terms of efficiency, optimization, and governance.
  • Infrastructure management and resource allocation are crucial in AI deployments.
  • Right-sizing workloads and maximizing resource utilization are key considerations.
  • Future trends in machine learning and AI include advancements in math and the need for accessible and efficient technology.

#071 – Developer Experience & Spring with DaShaun Carter

In this new episode of the Unexplored Territory Podcast, DaShaun Carter, a Spring Developer Advocate at VMware Tanzu and Broadcom, discusses his career highlights, his home lab setup, and his passion for Spring. He explains the concept of developer experience and how Spring and Tanzu contribute to it. DaShaun also highlights innovations in Spring, such as AOT processing and native images, and their impact on use cases, and he discusses the relationship between the open source side of Spring and the closed source solutions in the Tanzu portfolio. Finally, he explores the importance of developer experience in platform engineering. DaShaun and Johan also cover collaboration between developers and platform engineers, the value of Spring for platform engineers, the role of AI in developer experience and Spring, interesting topics for the VMware Explore conference, and where to learn more about Spring and open source.

Takeaways

  • Spring and Tanzu provide an easy and efficient developer experience, allowing developers to focus on solving problems and delivering software.
  • Innovations in Spring, such as AOT processing and native images, enable the deployment of enterprise-grade workloads on low-cost devices and at scale.
  • The open source aspect of Spring allows flexibility and choice for customers, while the commercial solutions in the Tanzu portfolio provide additional support and 24/7 access to experts.
  • Developer experience plays a crucial role in platform engineering, as it attracts developers to the platform and enables efficient onboarding and deployment processes.
  • Collaboration between developers and platform engineers is crucial for successful software delivery.
  • Platform teams should build relationships with developers and continuously iterate on meeting their needs.
  • Spring provides enterprise-grade, production-ready tools and frameworks that make the life of platform engineers easier.
  • AI is becoming increasingly important in the developer experience, and Spring AI provides an abstraction layer for consuming AI models.
  • Interesting topics for the VMware Explore conference include overcoming obstacles in software delivery, cost-saving solutions, and success stories.
  • To learn more about Spring and open source, connect with DaShaun on X, YouTube, and LinkedIn, and check out the Spring Office Hours show.

#070 – vSAN performance with Patryk Wolsza (Intel)

In this conversation, Duncan and Patryk discuss vSAN performance, specifically focusing on vSAN ESA. Patryk shares his findings from comparing Intel and AMD CPUs, highlighting that vSAN ESA performs better on Intel CPUs in almost every scenario. They also discuss the cost and price point considerations when choosing between VMware vSAN OSA and vSAN ESA. Patryk explains the configuration and testing process for OSA and ESA, as well as the performance impact of RDMA and 100 Gig NICs. The conversation concludes with recommendations for customers, emphasizing the importance of trying new technologies and exploring the benefits they can offer.

Takeaways

  • vSAN ESA performs better on Intel CPUs compared to AMD CPUs in various scenarios.
  • Consider the cost and price point when choosing between OSA and ESA.
  • RDMA and 100 Gig NICs can significantly improve vSAN performance, reducing latency and increasing throughput.
  • It is recommended to try new technologies and explore their benefits to optimize vSAN performance.

Some links to topics discussed: