#077 – Introducing Data Services Manager 2.0 featuring Cormac Hogan

In this conversation, Duncan and Cormac Hogan discuss VMware’s Data Services Manager (DSM) and its role in offering data services in a full-stack private cloud. They cover topics such as the use cases for DSM, the integration with Kubernetes, the support for different databases, the automation capabilities, and the licensing model. Cormac highlights the features of DSM, including lifecycle management, backups, scaling, monitoring, and advanced settings. He also mentions the upcoming release of new features and additional data services.

Takeaways

  • Data Services Manager (DSM) is a VMware product that offers data services in a full-stack private cloud.
  • DSM integrates with Kubernetes and allows VI administrators to maintain control of vSphere resources while offering data services.
  • DSM supports databases such as Postgres and MySQL, with support for other data services like AlloyDB in tech preview.
  • DSM provides features such as lifecycle management, backups, scaling, monitoring, and advanced settings.
  • DSM is included in VMware Cloud Foundation (VCF) and support can be added through the Private AI Foundation add-on.

#076 – AI Roles Demystified: A Guide for Infrastructure Admins with Myles Gray

In this conversation, Myles Gray discusses the AI workflow and its personas, the responsibilities of data scientists and developers in deploying AI models, the role of infrastructure administrators, and the challenges of deploying models at the edge. He also explains the concept of quantization and why model accuracy matters. Additionally, he walks through the pipeline for deploying models and the difference between unit testing, which exercises a single module or function within an application, and integration testing, which verifies the interaction between different components or applications. Other topics include MLflow and similar tools for storing and managing ML models, the emergence of smaller models as a response to the resource constraints of large models, the importance of collaboration between the different personas for security and governance in AI projects, and the role of data governance policies in maintaining data quality and consistency.
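
To make the quantization point concrete, here is a minimal sketch (not from the episode) of post-training dynamic quantization using PyTorch. The model, layer sizes, and file handling are illustrative assumptions only; the idea is simply that weights stored as int8 shrink the model for edge deployment, at some potential cost in accuracy that should be re-validated.

```python
import os

import torch
import torch.nn as nn

# Illustrative stand-in model; any nn.Module with Linear layers works the same way.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Post-training dynamic quantization: Linear weights become int8, activations
# are quantized on the fly at inference time. Accuracy must be re-checked.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m: nn.Module) -> float:
    """Serialize the state dict and report its on-disk size in MB."""
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32 model: {size_mb(model):.2f} MB")
print(f"int8 model: {size_mb(quantized):.2f} MB")
```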

Takeaways

  • The AI workflow involves multiple personas, including data scientists, developers, and infrastructure administrators.
  • Data scientists play a crucial role in developing AI models, while developers are responsible for deploying the models into production.
  • Infrastructure administrators need to consider the virtualization layer and ensure efficient and easy consumption of infrastructure components.
  • Deploying AI models at the edge requires quantization to reduce model size and considerations for form factor, scale, and connectivity.
  • The pipeline for deploying models involves steps such as unit testing, scanning for vulnerabilities, building container images, and pushing to a registry.
  • Unit testing focuses on testing a single module or function in isolation (see the short sketch after this list).
  • Integration testing verifies the interaction between different components or applications and the functionality of the system as a whole.
  • MLflow and other tools are used to store and manage ML models.
  • Smaller models are emerging as a solution to the resource constraints of large models.
  • Collaboration between different personas is important for ensuring security and governance in AI projects.
  • Data governance policies are crucial for maintaining data quality and consistency.
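
The short sketch below illustrates the unit versus integration testing distinction. The functions, the fake model, and the pytest-style tests are hypothetical stand-ins, not code discussed in the episode.

```python
import pytest

def normalize(features):
    """Scale a list of numbers to the 0-1 range."""
    lo, hi = min(features), max(features)
    return [(f - lo) / (hi - lo) for f in features]

def predict(features, model):
    """Run the (stubbed) model on normalized features."""
    return model(normalize(features))

# Unit test: exercises a single function in isolation.
def test_normalize_unit():
    assert normalize([0, 5, 10]) == [0.0, 0.5, 1.0]

# Integration test: exercises how the pieces interact, here the
# preprocessing step feeding a (fake) model end to end.
def test_predict_integration():
    fake_model = lambda feats: sum(feats)
    assert predict([0, 5, 10], fake_model) == pytest.approx(1.5)
```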

#075 – Newsflash – VMware Workstation and Fusion licensing changes! (Did I hear free?)

For this special edition of the podcast, Duncan invited Michael Roy to discuss the latest VMware Workstation and VMware Fusion announcements. VMware Workstation and Fusion are desktop hypervisor products that allow users to run virtual machines on their PC or Mac. Starting today, Workstation and Fusion commercial licenses will only be available through an annual subscription. The price for both products is now $199 per year. The free versions of Fusion Player and Workstation Player are being discontinued, but the Pro versions will be available for free for personal use. Support for personal use products will be community-based, while commercial users will have support included in their subscription. The focus of future innovation will be on the integration between vSphere and Workstation/Fusion, providing a local virtual sandbox for learning, development, and testing.

Takeaways

  • VMware Workstation and Fusion are desktop hypervisor products for running virtual machines on PC and Mac.
  • Commercial use of Workstation and Fusion is shifting from perpetual licenses to annual subscriptions.
  • The free versions of Fusion Player and Workstation Player are being discontinued, but the Pro versions will be available for free for personal use.
  • Support for personal use products will be community-based, while commercial users will have support included in their subscription.
  • Future innovation will focus on integrating vSphere with Workstation and Fusion to provide a local virtual sandbox for learning, development, and testing.

#074 – Exploring the Cloud-Native Computing Foundation (CNCF) with William Rizzo

In this conversation, Johan van Amersfoort interviews William Rizzo about the Cloud Native Computing Foundation (CNCF) and its impact on cloud computing and modern infrastructure. They discuss the role of the CNCF in making cloud native technology ubiquitous and the support it provides to cloud native projects. They also explore the advantages of using CNCF projects for IT admins and platform engineers, including the ability to build platforms with integrated and efficient tools. They touch on the future trends and developments within the CNCF, such as platform engineering and AI/ML workloads, and how IT professionals can prepare for these changes. William also shares his personal project on platform engineering and the importance of calculating the return on investment for platform engineering practices.

Takeaways

  • The CNCF aims to make cloud native technology ubiquitous by providing support and resources to cloud native projects.
  • Using CNCF projects allows IT admins and platform engineers to build integrated and efficient platforms.
  • Platform engineering and AI/ML workloads are future trends within the CNCF.
  • Calculating the return on investment for platform engineering practices is important (a back-of-the-envelope sketch follows this list).
  • William Rizzo is working on a project to simplify the calculation of return on investment for platform engineering practices.
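
The ROI point lends itself to a quick back-of-the-envelope calculation. The numbers and the helper below are purely illustrative assumptions, not figures from the episode or from William's project.

```python
def platform_roi(annual_benefit: float, annual_cost: float) -> float:
    """Classic ROI: (benefit - cost) / cost, expressed as a percentage."""
    return (annual_benefit - annual_cost) / annual_cost * 100

# Example: a platform team costing 800k/year that saves product teams
# 1.2M/year in reduced toil and faster delivery.
print(f"ROI: {platform_roi(1_200_000, 800_000):.0f}%")  # -> ROI: 50%
```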

Chapters

00:00 – Introduction and William Rizzo’s IT Career

03:09 – Overview of the Cloud Native Computing Foundation

06:39 – Other Core Projects and Technologies Hosted by the CNCF

10:17 – Impact of CNCF Projects and Principles on Cloud Computing

13:31 – Advantages of Using CNCF Projects for IT Admins and Platform Engineers

14:45 – Interoperability and Standardization across Cloud Platforms

23:45 – Resources for Learning about Cloud Native Technologies

28:17 – Future Trends and Developments within the CNCF

38:49 – William Rizzo’s Personal Project on Platform Engineering

46:04 – Calculating the Return on Investment for Platform Engineering Practices

49:28 – Closing Remarks

More about William: https://www.linkedin.com/in/william-rizzo/
The link to the CNCF website: https://www.cncf.io/

#073 – What was announced for vSAN in the last month(s)? Featuring Pete Koehler!

In episode 073 Duncan and Pete discuss various updates and changes related to vSAN, including ReadyNode configurations, licensing, vSAN Max, capacity reporting, and compression ratios. They highlight the improvements in compression ratios with vSAN ESA, which can result in significant space efficiency gains. They also discuss the use cases for vSAN Max and vSAN HCI, as well as the flexibility in making changes to ReadyNode configurations. Overall, they emphasize the ongoing development and exciting future of vSAN and VMware Cloud Foundation.

Takeaways

  • vSAN ESA offers improved compression ratios, with an average of 1.5x and some customers achieving 1.7x or better.
  • vSAN Max is a centralized shared storage solution that provides storage services to multiple vSphere clusters.
  • Customers can choose between vSAN Max and vSAN HCI based on their needs, such as independent scaling of storage and compute, separate lifecycle management, extending the life of existing vSphere clusters, or specific application requirements.
  • Changes in ReadyNode configurations for vSAN Max have reduced the minimum number of hosts required and lowered the hardware requirements, making it more accessible for smaller enterprises.
  • Capacity reporting in vSAN has been improved with the introduction of LFS overhead reporting, providing more accurate information on capacity usage.
  • vSAN ESA’s improved compression ratios, combined with RAID 5 or RAID 6 erasure coding, can result in significant space efficiency gains compared to the original storage architecture; a rough arithmetic sketch follows this list.
  • Ongoing development and updates are expected in vSAN and VMware Cloud Foundation, with exciting new capabilities on the horizon.
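
As a rough arithmetic sketch of why that combination matters, the example below compares raw capacity needs under assumed layouts: RAID-1 mirroring with no compression for the original architecture, and RAID-5 (4+1) erasure coding with the 1.5x average compression ratio mentioned above for ESA. The layouts and numbers are assumptions for the sake of illustration, not sizing guidance from the episode.

```python
# Assumed layouts (not episode-quoted):
# - OSA baseline: RAID-1 mirroring for FTT=1 -> 2.0x raw capacity per usable TB
# - ESA:          RAID-5 (4+1) erasure coding -> 1.25x raw per usable TB,
#                 plus an assumed 1.5x compression ratio on the data itself.
logical_tb = 100  # amount of data written by the VMs

osa_raw = logical_tb * 2.0           # mirrored copies, no compression assumed
esa_raw = (logical_tb / 1.5) * 1.25  # compress first, then add parity overhead

print(f"OSA raw capacity needed: {osa_raw:.1f} TB")
print(f"ESA raw capacity needed: {esa_raw:.1f} TB")
print(f"Space saved: {(1 - esa_raw / osa_raw) * 100:.0f}%")
```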

Links