Proxmox, a.k.a. "prox", a.k.a. Proxmox VE or Proxmox Virtual Environment, has been updated once again. The virtualization platform, which, unlike its competitors, has been evolving steadily for sixteen and a half years (!), ships this time with a small but interesting list of changes.

This release is built on Debian 12.8 (Bookworm) but uses Linux kernel 6.8.12-4 by default, with kernel 6.11 available as an option. The software stack includes updates to technologies such as QEMU 9.0.2, LXC 6.0.0, and ZFS 2.2.6 (with compatibility patches for kernel 6.11).

Improvements in Proxmox Virtual Environment 8.3

  • SDN and firewall integration: SDN lets you create virtual networks (VNets) and manage them through the Proxmox VE web interface. SDN is now integrated with the firewall and automatically generates IP sets for VNets and virtual guests, which simplifies creating and maintaining firewall rules. The new nftables-based firewall can filter network traffic both at the host level and at the VNet level.

  • Webhooks for the notification system: the Proxmox notification system can trigger HTTP requests for various events, such as system updates or problems with cluster nodes. This allows integration with any service that supports webhooks (a minimal receiver sketch follows this list).

  • New "Tag View" for the resource tree: lets users quickly see virtual guests grouped by their tags.

  • Ceph Squid support (technology preview): adds support for Ceph Squid 19.2.0, while Ceph Reef 18.2.4 and Ceph Quincy 17.2.7 remain supported. Users can choose their preferred Ceph version at installation time.

  • Faster container backups: when backing up containers to Proxmox Backup Server, files that have not changed since the last backup can now be skipped, which speeds up the process.

  • Migration from other hypervisors: importing virtual machines from OVF and OVA files through the Proxmox VE web interface has been streamlined. There is also an import wizard for migrating virtual machines from other hypervisors, such as VMware.
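
A minimal sketch of a webhook receiver that such a notification target could post to, using only the Python standard library. The port and the JSON body layout are assumptions; the actual request body is whatever you configure in the target's template.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    """Accepts HTTP POSTs from a Proxmox VE webhook notification target."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        raw = self.rfile.read(length)
        try:
            # Assumes the configured body template produces JSON.
            event = json.loads(raw)
        except json.JSONDecodeError:
            event = {"raw": raw.decode("utf-8", "replace")}
        print("notification received:", event)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Port 9000 is an arbitrary choice for this sketch.
    HTTPServer(("0.0.0.0", 9000), NotificationHandler).serve_forever()
```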

Where to get it?

Proxmox Virtual Environment 8.3 is available for download at https://www.proxmox.com/downloads. The ISO image contains the full feature set and can be installed on bare metal. Upgrades from earlier Proxmox VE versions are possible via apt. It is also possible to install Proxmox VE 8.3 on top of Debian.

License

Proxmox Virtual Environment is free and open-source software published under the GNU Affero General Public License, v3.

The changes at a glance

Release notes

Released 21 November 2024:

  • Based on Debian Bookworm (12.8)

  • Latest 6.8.12-4 Kernel as new stable default

  • Newer 6.11 Kernel as opt-in

  • QEMU 9.0.2

  • LXC: 6.0.0

  • ZFS: 2.2.6 (with compatibility patches for Kernel 6.11)

  • Ceph Reef 18.2.4

  • Ceph Quincy 17.2.7

  • New Ceph Squid 19.2.0 available as technology preview

Highlights

  • New "Tag View" for a quick and customizable overview of virtual guests.Users can already categorize their virtual guests using custom tags.The new "Tag View" view type for the resource tree shows virtual guests grouped according to their tags.This allows for a quick overview of the categories of virtual guests in the cluster.

  • Tighter integration of the Software-Defined Networking (SDN) stack with the firewall. Proxmox VE SDN now generates IP sets for VNets and virtual guests managed by the PVE IP address management plugin. These IP sets can be referenced in firewall rules, making the rules simpler and easier to maintain. In addition, the opt-in firewall based on nftables now allows filtering forwarded traffic, both on the host and VNet level. For example, this can be used for restricting SNAT traffic or traffic flowing from one Simple Zone to another.

  • More streamlined guest import from files in Open Virtualization Format (OVF) and Open Virtualization Appliances (OVA). OVF and OVA files can be directly imported from file-based storages in the GUI. This makes it easier to import virtual appliances and simplifies migration from hypervisors supporting OVF/OVA export. Users can upload OVA files from their local machine or download them from a URL. The improved OVF/OVA importer now also recognizes the guest OS type, NICs, boot order, and boot type.

  • Webhook target for the notification system. The new webhook notification target allows notification events to trigger HTTP requests. Request headers and body can be customized and can contain notification metadata. This allows users to push notifications to any target that supports webhooks.

  • New change detection modes for speeding up container backups to Proxmox Backup Server. Metadata and data of backup snapshots are now stored in two separate archives. Optionally, files that have not changed since the previous backup snapshot can be identified using the previous backup snapshot's metadata archive. Processing of unchanged files is avoided when possible, which can lead to a significant reduction in backup runtime.

  • Ceph Squid 19.2.0 is available as a technology preview.

  • Seamless upgrade from Proxmox VE 7.4, see Upgrade from 7 to 8

Changelog Overview

Enhancements in the web interface (GUI)

  • Introduce a tag view for the resource tree. Users can already assign tags to virtual guests to categorize them according to custom criteria. The new tag view shows virtual guests grouped according to their assigned tags. This allows users to get a quick and structured overview of the virtual guests in their cluster.

  • Confirmation dialogs for guest actions now also display the guest name (issue 5787).

  • Allow unprivileged users to create and manage their API tokens via the GUI (issue 5722). The backend already allowed this, but the functionality was not available in the GUI.

  • Unplugging disks from a running VM is now done asynchronously to avoid running into the HTTP timeout of 30 seconds.

  • Increase the minimum length requirement for new passwords to 8 characters.

  • Nodes in maintenance mode are now displayed with a wrench icon in the resource tree.

  • Show only installed services in the node's system panel by default, but optionally allow showing all services (issue 5611).

  • Right-align numbers in the S.M.A.R.T. values table (issue 5831).

  • Update the noVNC guest viewer to upstream version 1.5.0.

  • Fix an issue where using the noVNC console would cause the browser to attempt storing a VNC password (issue 5639).

  • Fix an issue where notes for nodes and virtual guests did not preserve percent encodings (issue 5486).

  • Fix an issue where clicking on an external link to the GUI would display a login screen, even if the current session was still valid.

  • Fix an issue with reset behavior when editing a Proxmox Backup Server storage.

  • Fix inconsistent reporting of host CPU usage in node selectors.

  • Fix an issue where editing the PCI mappings for any but the first node would fail.

  • Fix an issue where the Datacenter summary would miscalculate the storage size if a cluster node is offline.

  • Fix an issue where the backup job details would misreport the backup mode as snapshot mode instead of suspend mode.

  • Fix an issue where the permission check for adding the TPM state was overly strict.

  • Fix a regression which broke the mobile UI (issue 5734).

  • Fix incorrect online help links (issue 5632).

  • Disable the button for regenerating the cloud-init image if the user lacks the necessary privileges. This better aligns the GUI with the privilege check in the backend.

  • Improved translations, among others:

    • Bulgarian (NEW!)

    • French

    • German

    • Russian

    • Spanish

    • Traditional Chinese

Virtual machines (KVM/QEMU)

  • New QEMU version 9.0.2. Improve error reporting and error handling with fleecing images. Fix crashes when creating snapshots without state of guests with VirtIO Block devices. Fix a compiler warning by dropping unused code (issue 4726). See the upstream changelog for further details.

  • Improved support for importing virtual machine appliances from OVF/OVA files. OVF/OVA files can now be imported directly via the GUI from file-based storages. This can be enabled by selecting the import content type for that storage. OVA files can also be uploaded from the local machine or downloaded from a URL. The OVF importer now also tries to initialize the VM with the correct OS type, NICs, boot order, and boot type. Note that the Open Virtualization Format is not always strictly adhered to and allows for vendor extensions. In addition, not all exporters or image creators strictly follow the base standard. The Proxmox VE integration tries to handle common deviations when parsing these files, but it is expected that some bugs will still occur. Please report these to us, ideally with a link to the OVA, so we can try to add quirks for more vendors.

  • Make NVIDIA vGPU passthrough available under kernel 6.8 by adapting to changes in the NVIDIA vGPU driver.

  • Initial support for AMD Secure Encrypted Virtualization (SEV). On supported platforms and guest operating systems, SEV can encrypt guest memory. Some features like live migration, snapshots with RAM, and PCI passthrough are unsupported or cannot be done securely. Initial support for SEV-Encrypted State (AMD-SEV-ES), which additionally encrypts CPU state, is experimental.

  • Increase compatibility with Cloudbase-Init, a cloud-init re-implementation for Windows (issue 4493).

  • When adding or editing a PCI resource mapping that uses mediated devices in the GUI, show available mediated device types of all available PCI devices, instead of only the first one.

  • Provide more detailed error messages for some types of migration, storage move, live-restore, and live-import failures.

  • Fix an issue where backing up a VM template would fail due to insufficient resources on the host (issue 3352).

  • The resource tree now shows tooltips for entries where useful information is available, for example node entries.

  • The selector for security groups now shows a tooltip for comments that are too long to fit within the column width (issue 5879).

  • Increase the timeouts for attaching or detaching new drives to QEMU (issue 5440). This fixes an issue where detaching a fleecing image after a backup could fail on a busy host.

  • Increase the timeouts when executing QEMU human monitor commands via the API and CLI (a hedged API sketch follows this list).

  • Show CPU affinity in the Hardware panel (issue 5302).

  • Clarify description of migration downtime.

  • Print an informative error message if local resources prevent VM live-migration, snapshot with RAM or hibernation.

  • Fix an issue where vCPUs would be throttled after taking a snapshot.

  • Fix an issue where intermediate state and volumes would not be completely cleaned up if a suspend operation fails.

  • Improvements to TPM state disk handling:

    • Correct schema to reflect that the default TPM state is 1.2, not 2.0.

    • Forbid changing the version of an existing TPM state, as this will lead to VM start failure.

    • Avoid warnings about undefined value when TPM version is not explicitly set (issue 5562).

  • Avoid warning about uninitialized value when cloning cloud-init disk (issue 5572).

  • Clarify in the schema that VGA type cirrus is not recommended.

  • Fix an issue where only the root user could add a SPICE USB port (issue 5574).

  • Fix an issue where changes to the CPU limit or CPU affinity of a running VM would be reverted after a systemd daemon reload (issue 5528).

  • Fix an issue where the link-down setting would not be honored when hot-plugging a virtual NIC (issue 5619).

  • Avoid an issue where a VM could not be resumed on the target node automatically after live-migration, and would need to be resumed manually.

  • Avoid wrongly logging success for some kinds of failures during live migration.

  • Fix an issue where live migration could crash the VM if the local VM disk was under heavy load.

  • Fix an issue where starting a remote migration via the API would fail.

  • Fix compiler warning when building qmeventd with newer compilers (issue 5714).

  • Fix some typos in user-visible messages.

  • Log process ID of newly started VMs to the syslog to facilitate troubleshooting.
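
Related to the monitor-timeout fix above: a hedged sketch of running a QEMU human monitor command through the REST API with the requests library. The host, node name, VM ID, and API token are placeholders, and verify=False is acceptable only against a self-signed test setup.

```python
import requests

PVE = "https://pve.example.com:8006"  # placeholder host
TOKEN = "root@pam!monitor=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder API token

# POST /nodes/{node}/qemu/{vmid}/monitor runs a human monitor command.
resp = requests.post(
    f"{PVE}/api2/json/nodes/pve1/qemu/100/monitor",
    headers={"Authorization": f"PVEAPIToken={TOKEN}"},
    data={"command": "info status"},
    verify=False,  # self-signed test certificate; verify properly in production
)
resp.raise_for_status()
print(resp.json()["data"])
```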

Containers (LXC)

  • Add support for containers running Ubuntu 24.04 Noble and Ubuntu 24.10 Oracular.

  • Relax version checks for Fedora containers by only requiring at least Fedora 22, instead of also checking for a maximum supported version. This adds support for containers running Fedora 41.

  • Add support for containers running OpenSUSE Tumbleweed Slowroll (issue 5762).

  • Add support for containers running openEuler (issue 5720).

  • Allow enabling discard for root filesystem and mount points (issue 5761).

  • Add an option to pass through devices in read-only mode.

  • Avoid committing an invalid container configuration if network interface hotplug fails.

  • Fix an issue where the network configuration would not take effect for containers running Ubuntu 23.04 and later.

  • Fix an issue where Alma Linux, Rocky Linux, and CentOS containers would lose assigned IPv6 addresses (issue 5742).

  • Clarify reporting of percentages in the output of pct df (issue 5414).

  • Fix a regression where starting containers directly after creation could fail.

  • Add missing interfaces endpoint to the API index.

  • Fix an issue where the API endpoint for querying network interfaces of a running container would only return a result on the node on which the container is running (issue 5674). A hedged API sketch follows the template list below.

  • Fix a regression where disk quotas would not be applied (issue 5666).

  • Templates:

    • Provide Ubuntu 24.04 template.

    • Provide Ubuntu 24.10 template.

    • Provide Fedora 40 template.

    • Provide Fedora 41 template.

    • Provide openEuler 24.09 template.

    • Provide OpenSUSE 15.6 template.

    • Provide AlpineLinux 3.20 template.

    • Provide Devuan Daedalus 5.0 template.

    • Update Debian Bookworm template to 12.7.

    • Update ArchLinux template to 20240911.

    • Update AlmaLinux 9 template.

    • Update Rocky Linux 9 template.

    • Update CentOS Stream 9 template.
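
Following up on the interfaces endpoint fix above, a hedged sketch of querying a running container's network interfaces over the API; host, node, container ID, and token are placeholders.

```python
import requests

PVE = "https://pve.example.com:8006"  # placeholder host
TOKEN = "root@pam!audit=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder API token

# GET /nodes/{node}/lxc/{vmid}/interfaces lists the interfaces of a running container.
resp = requests.get(
    f"{PVE}/api2/json/nodes/pve1/lxc/101/interfaces",
    headers={"Authorization": f"PVEAPIToken={TOKEN}"},
    verify=False,  # self-signed test certificate only
)
resp.raise_for_status()
for iface in resp.json()["data"]:
    print(iface)
```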

General improvements for virtual guests

  • Add missing descriptions for some properties of the guest status API endpoints.

HA Manager

  • Change the active Cluster Resource Manager (CRM) state to idle after ~15 minutes of no service being configured (issue 5243). This is mostly cosmetic, as the CRM never needed to trigger self-fencing on quorum loss anyway. All state under the control of the CRM is managed by pmxcfs, which is already protected by quorum and cluster synchronization.

  • CRM: get active if it appears that no CRM is already active and there are either pending CRM commands or there are nodes that should come out of maintenance mode. This has no direct impact on most of the HA stack, as the CRM always switches to active when a service is added anyway. However, the state of maintenance mode is displayed in the GUI, and not clearing it in a timely manner could lead to user confusion.

Improved management for Proxmox VE clusters

  • The notification system now supports generic webhook targets. The webhook target allows notification events to trigger arbitrary HTTP POST/PUT/GET requests. Users can customize HTTP request headers and body using templates. In case the HTTP request should include secret values, these values can be stored in a protected configuration file that is only readable by root.

  • Improvements to the notification system:

    • When creating or editing a notification matcher in the GUI, show available notification metadata fields and possible values.

    • Add the job-id notification metadata field to backup notification events.

    • Allow more fine-grained matching for events about failed replication jobs by adding the job-id and hostname metadata fields.

    • The hostname metadata field for backup notification events does not include the domain anymore.

  • Reduce amplification when writing to the cluster filesystem (pmxcfs), by adapting the fuse setup and using a lower-level write method (issue 5728).

  • Use the correct ssh command arguments for communication between nodes (issue 5461). Surfaced with the improvements from issue 4886.

  • Fix an issue where a long-running HTTP request handler could cause other HTTP requests to fail with HTTP 599 "Too many redirections" (issue 5391).

  • Add a new /cluster/metrics/export API endpoint for retrieving status data of various subsystems. This allows implementing pull-style metric collection systems. The data is retrieved from a node-local shared cache.
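
A hedged sketch of a pull-style collector against the new endpoint. Only the endpoint path comes from the release notes; the host, token, and handling of the returned data are placeholders.

```python
import requests

PVE = "https://pve.example.com:8006"  # placeholder host
TOKEN = "root@pam!metrics=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder API token

# /cluster/metrics/export serves status data from a node-local shared cache.
resp = requests.get(
    f"{PVE}/api2/json/cluster/metrics/export",
    headers={"Authorization": f"PVEAPIToken={TOKEN}"},
    verify=False,  # self-signed test certificate only
)
resp.raise_for_status()
print(resp.json()["data"])
```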

Backup/Restore

  • New change detection modes Data and Metadata for container backups to Proxmox Backup Server (issue 3174). In both new modes, metadata and data of file-based backup snapshots are stored separately. This removes the necessity for a dedicated catalog file but still allows for efficient metadata lookups. In Metadata mode, files that have not changed since the previous backup snapshot are identified using the previous backup snapshot's metadata archive. Processing of unchanged files is avoided when possible, which can lead to a significant reduction in backup runtime. The change detection mode can be adjusted in the Advanced Options of container backup jobs; a hedged API sketch follows at the end of this section.

  • Improvements to fleecing backups:

    • Fix an issue where fleecing backups would fail for slightly slow storages, the reason being an incorrect timeout (issue 5409).

    • Improve error reporting when a copy-before-write operation fails.

    • Fix an issue where guest IO could become stuck after a failed fleecing backup.

  • Allow to set a custom job ID for backup jobs. Setting this option is currently restricted to root@pam.

  • Improvements to file restore:

    • Mount NTFS filesystems with UTF-8 charset. This fixes an issue where files with non-ASCII names would not be visible during file restore (issue 5465).

    • Log errors when a file cannot be accessed to facilitate troubleshooting.

  • Warn during container backup if tar is executed with an exclusion pattern ending in a slash. tar will match neither files nor directories with that pattern, which may be unexpected.

  • Improve error reporting during container backups by logging errors emitted by rsync.

  • Improvements to proxmox-backup-client, which is used for container backups to Proxmox Backup Server.

    • Periodically log the current backup progress (issue 5560).

    • Prefer to store temporary files in the XDG Cache directory (~/.cache by default) instead of /tmp (issue 3699).

    • Fix an issue where restoring backups as an unprivileged user could fail due to an internal file owned by root.

  • If a VM backup detects a running backup job, cancel it before proceeding. This can happen after a hard failure.

  • Disks newly added to templates are now directly converted to base volumes (issue 5301).

  • The qm disk import command now supports an option to directly attach the imported disk.

  • Clarify error message when encountering a timeout during restore from a VMA backup file.

  • Increase timeout for reading the VMA header to avoid failures when IO pressure is high.

  • Report the correct unit Kibibyte instead of Kilobyte for the bandwidth limit in the backup logs.

  • Fix a regression where backup jobs converted from vzdump.cron would fail to start (issue 5731).
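
As referenced in the change detection entry above, a hedged sketch of switching an existing backup job to Metadata mode over the API. The parameter name pbs-change-detection-mode and the job ID are assumptions; verify them against your API schema (for example with pvesh) before relying on this.

```python
import requests

PVE = "https://pve.example.com:8006"  # placeholder host
TOKEN = "root@pam!backup=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder API token

# PUT /cluster/backup/{id} edits an existing backup job. The parameter name
# below is an assumption based on the matching vzdump option; verify it first.
resp = requests.put(
    f"{PVE}/api2/json/cluster/backup/backup-weekly-ct",  # placeholder job ID
    headers={"Authorization": f"PVEAPIToken={TOKEN}"},
    data={"pbs-change-detection-mode": "metadata"},
    verify=False,  # self-signed test certificate only
)
resp.raise_for_status()
```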

Storage

  • The ISO upload functionality now supports images compressed with bzip2 (issue 5267).

  • Improvements to the BTRFS storage plugin:

    • Add support for renaming volumes (issue 4272).

    • The BTRFS plugin now prints the executed command on failure to facilitate troubleshooting.

  • Improvements to the iSCSI storage plugin:

    • Shorten the Open-iSCSI login timeouts. Long timeouts could cause issues on setups where not all discovered portals are reachable.

    • Fix an issue where only the first defined iSCSI storage would become active on all cluster nodes.

    • Fix a security issue where an authenticated user with the necessary privileges could trick the system into accessing arbitrary host block devices, including passing them through into guests as a volume.

    The necessary privileges are Datastore.Allocate privilege on an iSCSI storage and VM.Config.Disk privilege on a VM.

    See PSA-2024-00010-1 for details.

  • Fix an issue where VMDKs could not be imported if their filename contains whitespace.

  • Improvements to the ESXi importer:

    • Fix a short-lived regression that would cause nodes to go grey if an ESXi storage is defined.

    • Add support for older ESXi configuration files that use the all-lowercase filename property (issue 5587).

  • Fix an issue where a failed SSH connection during ZFS replication would result in confusing errors.

  • Avoid misleading error message if a call to qemu-img gives no output, for example due to a timeout.

  • Implement additional safeguards when importing disks or ISO images from untrusted sources.

  • Properly catch errors when unlinking the temporary file created when uploading ISOs or templates.

Ceph

  • New Ceph 19.2 Squid available as technology preview. See the upstream changelog for more details.

  • When creating a Ceph metadata server (MDS), its ID can now be freely chosen. Previously, the ID always started with the hostname, which could cause problems because hostnames can start with numbers, but MDS IDs cannot (issue 5570).

  • Fix an issue where editing pool properties in the GUI could fail if the nosizechange property is set (issue 5010).

  • Prompt user for confirmation when installing a Ceph version that is currently considered a tech preview for Proxmox VE.

Access control

  • Enforce a minimum password length of at least 8 characters for new users and when updating the password of existing users.

  • Allow users without the Sys.Audit permission to see their own permissions on the API endpoint /access/permissions. This already worked before, but when a user without the Sys.Audit permission specifically passed their own userid, access was denied even though the user should have access to this information. A hedged sketch follows this list.

  • Fix a security issue where authenticated attackers with Sys.Audit or VM.Monitor privileges could download arbitrary host files via the API.See PSA-2024-00009-1 for details.

  • Allow API tokens without the Sys.Audit permission to see their own permissions on the API endpoint /access/permissions.

  • Allow users with the Permission.Modify permission on a path to update arbitrary permissions for that path, even when permission propagation is disabled.
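
To go with the /access/permissions items above, a minimal sketch of an API token checking its own permissions; host and token are placeholders.

```python
import requests

PVE = "https://pve.example.com:8006"  # placeholder host
TOKEN = "monitor@pve!selfcheck=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder API token

# Without an explicit userid, /access/permissions returns the caller's own
# permissions -- which now also works for tokens without Sys.Audit.
resp = requests.get(
    f"{PVE}/api2/json/access/permissions",
    headers={"Authorization": f"PVEAPIToken={TOKEN}"},
    verify=False,  # self-signed test certificate only
)
resp.raise_for_status()
print(resp.json()["data"])
```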

Firewall & Software Defined Networking

  • Integrate SDN stack and firewall more tightly by automatically generating IP sets. For every VNet, the SDN stack generates several IP sets, for example containing the IP ranges of its subnets or DHCP ranges. Also, the SDN stack generates an IP set for each guest that has entries in the PVE IPAM. Firewall rules can then reference the generated IP sets instead of hard-coded IP addresses and ranges. This makes firewall configuration less error-prone and simplifies maintenance. IP sets and rules are automatically updated on VNet or IPAM changes. A hedged sketch of inspecting the generated IP sets via the API follows at the end of this list.

  • Firewall support for forwarded traffic on the host level and on the VNet level. Allow to define firewall rules with a new forward direction. This allows hosts that act as a router to filter traffic passing through them. Filtering on the VNet level allows restricting guest-to-guest and guest-to-host traffic within a VNet. The forward direction is supported only by the new opt-in firewall based on nftables introduced in Proxmox VE 8.2.

  • Support creating VNets with isolation. By setting the advanced option Isolate Ports on a VNet, each guest interface connected to it will have the isolated flag set, preventing it from sending traffic to other guest interfaces. Traffic to the bridge port itself, and thus also to the outside world, still goes through. Port isolation is local to each host. The VNet firewall can be used to further isolate traffic in the VNet across nodes.

  • Show a confirmation dialog when applying pending SDN changes in the GUI (issue 5810).

  • Fix an issue where updating a virtual NIC would produce duplicate IPAM entries.

  • Fix an issue where editing a custom IPAM mapping in the GUI would error out.

  • When editing a VNet in the GUI, hide fields that are irrelevant for the current zone type.

  • Correctly supply a custom MTU setting for VLAN zones on non-VLAN aware host bridges (issue 5324).

  • Keep the proxmox-firewall daemon dormant, unless the new opt-in nftables-based firewall is activated, to prevent logging spurious parsing errors.

  • Align the feature set and naming conventions between the new nftables-based firewall and the legacy iptables-based one for feature parity:

    • Add support for REJECT rules.

    • Align parsing of firewall objects between both firewall implementations (issue 5410).

    • Add a SPICE macro.

    • Add support for icmp-type any.

    • Use the appropriate ICMPv6 type for rejecting traffic.

    • Fix handling ARP traffic when using the default block or reject policy.

    • Add conntrack rules to the output chain, to prevent wrongly unmarked packets.

    • Allow all ICMP and ICMPv6 types necessary for a proper functioning of the network according to RFC 4890.

    • Gracefully handle switching back to the iptables based firewall.

    • Fix handling ARP traffic for VLANs.
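
As mentioned in the SDN/firewall entry at the top of this list, a hedged sketch of listing datacenter-level IP sets over the API to inspect what was generated. That the SDN-generated sets are exposed through this particular endpoint is an assumption; host and token are placeholders.

```python
import requests

PVE = "https://pve.example.com:8006"  # placeholder host
TOKEN = "root@pam!fwaudit=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder API token

# GET /cluster/firewall/ipset lists datacenter-level IP sets. It is an
# assumption of this sketch that SDN-generated sets show up here.
resp = requests.get(
    f"{PVE}/api2/json/cluster/firewall/ipset",
    headers={"Authorization": f"PVEAPIToken={TOKEN}"},
    verify=False,  # self-signed test certificate only
)
resp.raise_for_status()
for ipset in resp.json()["data"]:
    print(ipset["name"])
```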

Improved management of Proxmox VE nodes

  • Improvements to Secure Boot management. With the need to update the revocation database embedded in the shim boot loader, some edge cases were discovered and improved in the proxmox-secure-boot metapackage. Ship an apt pinning snippet to ensure that Proxmox-provided packages are installed, even if Debian temporarily ships a higher version. Relax the dependency on the grub2 version to also allow the previous one, mostly to prevent accidental removal of the meta-package in edge cases.

  • Improvements to Proxmox Offline Mirror. Support repositories that provide a complete GPG keyring instead of a certificate (e.g. the Mellanox OFED repository). Remove empty directories left behind after syncing a mirror with removed snapshots to a medium; the large number of empty directories could lead to excessive runtimes on medium sync. Fix a typo in the documentation of the command arguments.

  • Fix an RCE vulnerability in the shim bootloader used for Secure Boot support. See PSA-2024-00007-1 for details.

  • The list of allowed VLAN IDs of VLAN-aware bridges (bridge-vids) can now be edited in the GUI (issue 3893).

  • Ship an updated version of the open-iscsi package, fixing an issue reported upstream, but not yet available in Debian.

  • Update the provided r8125-dkms package, needed for some of the commonly seen Realtek 2.5G NICs, to version 9.013.02-1.

  • Improvements to ifupdown2: Do not set the IPv6 stateless address autoconfiguration sysctl twice on a bridge interface. Fix a failing installation of the package in a chrooted environment, as created by debootstrap for example (issue 5869). Fix VXLAN configuration if only one VXLAN interface is defined. Skip calling files left behind by dpkg (e.g. .dpkg-old, .dpkg-new) in the pre- and post-up directories, as this can cause outages when switching from ifupdown (issue 5197).

  • Use the correct base64url decoder instead of base64 for EAB (external account bindings) in the ACME implementation.

  • Log warnings to the syslog for better visibility. Previously, warnings outside of a task were not logged at all, and task warnings were only logged to the task log. Now, in both cases warnings will also be visible in the syslog.

  • Show an informative error message if a network interface name exceeds the kernel-side length limit (issue 5454).

  • Avoid an error on systems where /etc/apt/sources.list does not exist (issue 5513).

  • Fix an issue where editing the network configuration via the GUI would drop Open vSwitch options with value 0 (issue 5623).

  • Fix an issue where the pve7to8 script did not detect 6.8 kernels.

  • Documentation for CLI commands now uses double-hyphen as argument prefix instead of the outdated single-hyphen.

  • Fix an issue where the documentation for CLI aliases did not mention the complete aliased command.

  • Correct return schemas of various API endpoints (issue 5753).

Installation ISO

  • Add a post-installation notification mechanism for automated installations (issue 5536). This mechanism can be configured with the new post-installation-webhook section in the answer file.

  • Add support for running a custom script on first boot after automated installation (issue 5579). The script can be provided in the ISO or fetched from a URL.

  • Allow users to set hashed passwords (instead of plaintext passwords) in the proxmox-auto-installer answer file.

  • Allow users to customize the label of the partition from which the automated installer fetches the answer file. This adds the --partition-label option to the proxmox-auto-install-assistant prepare-iso command. Previously, the partition label was hardcoded to PROXMOX-AIS.

  • Add ability to detect and rename an existing ZFS pool named rpool during the installation.

  • Add BTRFS compress option selector in the advanced disk options during installation (issue 5250).

  • Improve the email address validation to include a broader set of email address formats. This implements the email validation check specified in the HTML specification.

  • The text-based installer now fails if no supported NIC was found, similar to the graphical installer.

  • Improve UI consistency by adding the missing background layer for the initial setup error screen in the text-based installer.

  • Improve usability for small screens by adding a tabbed view for the advanced options at the disk selection step in the text-based installer. This change only affects screens with a width of 80 columns or less.

  • Fix an issue with ISOs generated with the proxmox-auto-install-assistant which caused the user to end up in the GRUB shell when booting from a block device (e.g. a USB flash drive) in UEFI mode.

  • Fix a bug which caused some kernel parameters related to the automated installer to be removed incorrectly.

  • Fix a bug which caused the installer to not detect Secure Boot in some cases.

  • Ask the user for patience while making the system bootable if multiple disks are configured, as this may take longer than expected.

  • Preserve the nomodeset kernel command-line parameter. A missing nomodeset parameter has caused display rendering issues when booting the finished Proxmox VE installation on some systems (see this comment for more information).

  • Ship the recent version 7.20 of memtest86+, adding support for current CPU generations (Intel Arrow Lake and AMD Ryzen 9000 series) as well as preliminary NUMA support.

  • Improve user-visible error and log messages in the installer.

  • Improve documentation for the proxmox-auto-install-assistant.

  • Improve error reporting by printing the full error message when the installation fails in proxmox-auto-installer.

  • Improve error reporting by printing the full error message when mounting and unmounting the installation file system fails in proxmox-chroot.

  • Improve debugging and testing by enumerating the installation environment anew (e.g. when running the command dump-env).

  • Send the correct Content-Type charset (utf-8) when fetching answer files from an HTTP server during automated installation.

  • Switch the text-based installer rendering backend from termion to crossterm.

Notable bugfixes and general improvements

  • Since the release of Proxmox VE 8.2, the Proxmox team has begun tracking explicit security issues publicly in the Proxmox forum. Following the posts there is highly recommended.

  • Kernel 6.8.12-4 fixes issues present in previous kernels of the 6.8 series, most notably:

    • Backport a fix for a security issue where a malicious guest with a VirtIO-net device could cause out-of-bounds access in the host kernel and, with certain hardware, even cause a kernel panic.

    See PSA-2024-00008-1 for details.

    • Backport a fix for sudden host reboots with certain AMD Zen4 CPU models.

    • Backport a fix for NFSv4 connection loss (issue 5558).

    • Backport a fix for a memory leak in the CIFS/SMB client.

    • Backport a fix for boot failures on setups using certain models of Adaptec RAID controllers (issue 5448).

    • Fix a rare issue where files from CephFS mounts would be read incompletely.

    • Backport a fix for kernel crashes on setups using (unsupported) MD-RAID.

    • Backport a patch improving e1000e stability on cable reconnection (issue 5554).

    • Backport a fix for a regression that made it impossible to manually power on LEDs in certain setups.

Known Issues & Breaking Changes

Proxmox VE IPAM (tech-preview): Change in backing path during upgrade for IPAM state and MAC-map cache file

During a cluster upgrade, changes made to the Proxmox VE IPAM on nodes that are not yet upgraded will be lost. The reason is that during the upgrade of the libpve-network-perl package on the first node, files used by the IPAM database are migrated to a new location.

Changes to the IPAM state file can be triggered by creating or starting guests with network devices on SDN VNets with DHCP enabled. You can still migrate guests from nodes with the old version to nodes with the new version during the upgrade process.

Comments (9)


  1. aborouhin
    21.11.2024 22:55

    The new nftables-based firewall

    Hallelujah :) The last reason I hadn't dared to finally bury the stewardess that is iptables and move to nftables on all my servers (well, first of all, in the Ansible playbooks that configure them) was Proxmox. Now nothing stands between me and modern best practices :)


  1. idle0
    21.11.2024 22:55

    How is storage organized there? Is there anything similar to VSAN?


    1. achekalin (author)
      21.11.2024 22:55

      What you are describing is built on Ceph in Proxmox VE. Storage options in general are quite diverse there; the only thing missing, alas, is a full analogue of sharing a single storage the way VMware VMFS does.


      1. Speccyfan
        21.11.2024 22:55

        What do you mean there isn't? I deployed Proxmox and fed it ordinary LUNs over FC, running OCFS2 on top of them.


        1. achekalin (author)
          21.11.2024 22:55

          Thanks, I didn't know that and haven't tried it.

          The link in my reply above lists the officially supported storage types. For example, md is not there, although nothing stops you from assembling and using it yourself; it's just that the Proxmox interface won't show much information about it, and there is no official way to build such storage with Proxmox's standard tools. As I understand it, you mounted OCFS2 into a local directory, i.e. to Proxmox it is a local filesystem, and the fact that it is actually clustered, networked, and behaves nothing like a local SSD is formally none of its concern. I started searching, and yes, people do this.

          Monitoring its operation, of course, is left to a cluster owner/administrator who is fully aware of all the joys of this technology.

          The other thing is that the machines you actually need usually move onto a virtualization host (or rather, into a cluster of such hosts), i.e. you want the whole setup to run reliably. Nobody forbids building storage on something the authors have never officially commented on and never built support for, but whether that makes sense in terms of long-term reliability is a question. For outright production use of a cluster, I would think many times before taking the risk; who knows under what load, and after how many years, some quirk will surface?


          1. Testman2023
            21.11.2024 22:55

            I started searching, and yes, people do this.

            There is a follow-up.
            Problem with PVE 8.1 and OCFS2 shared storage with io_uring
            https://forum.proxmox.com/threads/problem-with-pve-8-1-and-ocfs2-shared-storage-with-io_uring.140273/
            Bug 5430 - OCFS2 io_uring read/write issues in 6.8.4-2-pve
            https://bugzilla.proxmox.com/show_bug.cgi?id=5430


            1. achekalin (author)
              21.11.2024 22:55

              Alas, when you combine complex technologies, you get complex and sometimes intermittent problems. And if you are building a virtualization cluster for serious work, a careless attitude is a sure way to shoot yourself in the foot.

              As the Proxmox folks themselves wrote on their forum: you are building a virtualization host to save money, thanks to better hardware utilization, on buying a dozen machines, even if weaker ones; so buy that one (more powerful) server properly, with decent hardware, since you are saving money anyway.

              As for how many bugs like this are on the forum and in the bugzilla: well, there is plenty of everything there, and some of it is exactly in the style of "I crossed a grass snake with a hedgehog, even though nobody had ever tested this before; help me".


              1. Testman2023
                21.11.2024 22:55

                Looks like a Linux kernel patch is needed.
                https://www.altlinux.org/OCFS2
                "...OCFS2 не поддерживает новый (на 2022 год) интерфейс ядра Linux для асинхронного ввода/вывода io_uring, поэтому могут быть проблемы с программами, использующими io_uring (например, PVE, нужно использовать другие типы асинхронного ввода/вывода - native и threads)..."


                1. achekalin (author)
                  21.11.2024 22:55

                  You wouldn't build something like that in production, though, would you?