SPICE: how to configure virtual machine connections to allow multiple simultaneous clients

Multiple Clients

Summary

Support multiple concurrent connections to a single spice server.
Owner

Alon Levy and Yonit Halperin
Current Status

Targeted Release: spice-0.10
Experimental status, see below to enable.
Description

This feature is still experimental; it is not expected to work correctly when clients have different bandwidths, although it should not crash the server.

To enable:

export SPICE_DEBUG_ALLOW_MC=1
Then launch the qemu VM as usual.

The stdout will contain an additional line:

spice: allowing multiple client connections
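For example, a minimal end-to-end sketch of launching and connecting (the -spice options, port number, and client invocation are illustrative assumptions, not from the original page):

# on the host: allow multiple clients, then start the VM with a spice display
export SPICE_DEBUG_ALLOW_MC=1
qemu-kvm -vga qxl -spice port=5900,disable-ticketing disk.img

# from two separate terminals: the second client does not kick the first
spicec -h localhost -p 5900
spicec -h localhost -p 5900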
You can then launch clients as usual. A second client connection will not disconnect any of the previous clients. There is no set limit. The behavior of each channel in multiple client mode:

inputs – shared. All clients can supply mouse and keyboard.
display – shared. All clients receive display updates.
cursor – shared. All clients receive cursor updates.
playback – first connection after no connections. To do: make it shared.
record – first connection after no connections. To do: make it shared.
smartcard – first connection after no connections. To do: make it shared.
usbredir – first connection after no connections. To do: make it shared.
agent – ?
NB: The main channel is not on that list because it isn't a user-visible channel. Every client has its own main connection.

NB: First connection after no connections: To receive this channel you will have to connect after explicitly disconnecting all the previous clients. Otherwise you will easily reach a situation where none of the clients have any of those channels:

connect A
connect B (while A is connected)
disconnect A (B is still connected)
connect C
Now both B and C only have {inputs,display,cursor}; no client has any of the other (playback, record, smartcard, usbredir) channels. Specifically, the server didn't advertise those channels to clients B and C.

MultipleClientsImplementationNotes contains old design notes, not all implemented. To do: pick up notable notes from there over here.


From: http://www.spice-space.org/page/Features/MultipleClients

I never got this to work @.@"

XP guest moved from VMplayer to Linux KVM hits intelppm.sys / processr.sys BSOD errors

After moving an XP system that previously ran on VMplayer over to Proxmox VE, blue screens of death citing intelppm.sys, processr.sys, and the like started appearing. Googling and testing suggest the cause is a problem in the CPU configuration.

Resolved by booting the guest into Safe Mode and running:
sc config p3 start= disabled
sc config processor start= disabled
sc config intelppm start= disabled

These settings apparently make XP skip starting certain CPU-related driver services at boot, which is enough to resolve the error.

INTELPPM.SYS 0X0000007E


I hit this problem during a P2V migration, moving a physical XP machine onto KVM. The kvm-spice package installed on the host may have caused a bad CPU configuration for the virtual XP; after removing that package from the host, the virtual XP no longer produced the error.

Boot into Safe Mode and, at a command prompt, run "sc config intelppm start= disabled".

The processr.sys problem can be handled the same way, with "sc config processor start= disabled".

Alternatively, after booting into Safe Mode, renaming the intelppm.sys file under C:\WINDOWS\system32\drivers also fixes the problem.
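For example, a sketch from a Safe Mode command prompt (the .bak suffix is arbitrary):

ren C:\WINDOWS\system32\drivers\intelppm.sys intelppm.sys.bak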

Reference: http://dylanc.twbbs.org/?p=23

Proxmox VE : accessing COM port from the host in a VM

From: http://blog.wains.be/2010/01/05/proxmox-ve-accessing-com-port-from-the-host-in-a-vm/

Proxmox manages qemu servers with the qemu/kvm manager tool (qm); "qm" is its main management command. From the qm man page, "args" is a property in the VM CONFIGURATION section whose purpose is to pass arguments through to "kvm".

So adding the line "args: -serial /dev/ttyS0" to a qemu-server config file means that when "kvm" builds the VM from that config, it follows the "args" parameters and emulates a serial port device as well, with the virtual device mapped to /dev/ttyS0 on the host.

args: ...
    Note: this option is for experts only. It allows you to pass arbitrary arguments to kvm, for example:

    args: -no-reboot -no-hpet

When kvm configures the VM's virtual serial port hardware, many settings are available beyond a direct mapping to a host device. The complete list can be found via man qemu:

       -serial dev
           Redirect the virtual serial port to host character device dev. The default device is "vc" in
           graphical mode and "stdio" in non graphical mode.

           This option can be used several times to simulate up to 4 serial ports.

           Use "-serial none" to disable all serial ports.

           Available character devices are:

           vc[:WxH]
               Virtual console. Optionally, a width and height can be given in pixel with

                       vc:800x600

               It is also possible to specify width or height in characters:

                       vc:80Cx24C

           pty [Linux only] Pseudo TTY (a new PTY is automatically allocated)

           none
               No device is allocated.

           null
               void device

           /dev/XXX
               [Linux only] Use host tty, e.g. /dev/ttyS0. The host serial port parameters are set according
               to the emulated ones.

           /dev/parportN
               [Linux only, parallel port only] Use host parallel port N. Currently SPP and EPP parallel port
               features can be used.

           file:filename
               Write output to filename. No character can be read.

           stdio
               [Unix only] standard input/output

           pipe:filename
                named pipe filename

           COMn
               [Windows only] Use host serial port n

           udp:[remote_host]:remote_port[@[src_ip]:src_port]
               This implements UDP Net Console.  When remote_host or src_ip are not specified they default to
               0.0.0.0.  When not using a specified src_port a random port is automatically chosen.

               If you just want a simple readonly console you can use "netcat" or "nc", by starting QEMU
               with: "-serial udp::4555" and nc as: "nc -u -l -p 4555". Any time QEMU writes something to
               that port it will appear in the netconsole session.

               If you plan to send characters back via netconsole or you want to stop and start QEMU a lot of
               times, you should have QEMU use the same source port each time by using something like
               "-serial udp::4555@4556" to QEMU. Another approach is to use a patched version of netcat which
               can listen to a TCP port and send and receive characters via udp.  If you have a patched
               version of netcat which activates telnet remote echo and single char transfer, then you can
                use the following options to set up a netcat redirector to allow telnet on port 5555 to
               access the QEMU port.

               "QEMU Options:"
                   -serial udp::4555@4556

               "netcat options:"
                   -u -P 4555 -L 0.0.0.0:4556 -t -p 5555 -I -T

               "telnet options:"
                   localhost 5555

           tcp:[host]:port[,server][,nowait][,nodelay]
               The TCP Net Console has two modes of operation.  It can send the serial I/O to a location or
               wait for a connection from a location.  By default the TCP Net Console is sent to host at the
               port.  If you use the server option QEMU will wait for a client socket application to connect
               to the port before continuing, unless the "nowait" option was specified.  The "nodelay" option
               disables the Nagle buffering algorithm.  If host is omitted, 0.0.0.0 is assumed. Only one TCP
               connection at a time is accepted. You can use "telnet" to connect to the corresponding
               character device.

               "Example to send tcp console to 192.168.0.2 port 4444"
                   -serial tcp:192.168.0.2:4444

               "Example to listen and wait on port 4444 for connection"
                   -serial tcp::4444,server

               "Example to not wait and listen on ip 192.168.0.100 port 4444"
                   -serial tcp:192.168.0.100:4444,server,nowait

           telnet:host:port[,server][,nowait][,nodelay]
               The telnet protocol is used instead of raw tcp sockets.  The options work the same as if you
               had specified "-serial tcp".  The difference is that the port acts like a telnet server or
               client using telnet option negotiation.  This will also allow you to send the MAGIC_SYSRQ
               sequence if you use a telnet that supports sending the break sequence.  Typically in unix
               telnet you do it with Control-] and then type "send break" followed by pressing the enter key.

           unix:path[,server][,nowait]
               A unix domain socket is used instead of a tcp socket.  The option works the same as if you had
               specified "-serial tcp" except the unix domain socket path is used for connections.

           mon:dev_string
               This is a special option to allow the monitor to be multiplexed onto another serial port.  The
               monitor is accessed with key sequence of Control-a and then pressing c. See monitor access
               pcsys_keys in the -nographic section for more keys.  dev_string should be any one of the
               serial devices specified above.  An example to multiplex the monitor onto a telnet server
               listening on port 4444 would be:

               "-serial mon:telnet::4444,server,nowait"
           braille
               Braille device.  This will use BrlAPI to display the braille output on a real or fake device.

           msmouse
               Three button serial mouse. Configure the guest to use Microsoft protocol.

Proxmox VE : accessing COM port from the host in a VM

Posted on January 5, 2010

If you want to access the COM/serial port of your host machine from a KVM virtual machine in Proxmox VE, simply do the following :

vim /etc/qemu-server/104.conf
where 104 is the ID of the VM

add "args: -serial /dev/ttyS0" to the end of the file

It should look like this :

name: testVM
ide2: debian-500-i386-netinst.iso,media=cdrom
smp: 1
vlan0: rtl8139=XX:XX:XX:XX:XX:XX
bootdisk: ide0
ide0: vm-104-disk.qcow2
ostype: other
memory: 256
args: -serial /dev/ttyS0

If the VM is already running, you may have to stop the VM completely and start it again for it to see the COM port.

If you simply reboot the machine, it may not see it.
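A minimal sketch using Proxmox's qm tool (assuming VM ID 104 as above):

qm stop 104     # a full stop, not a reboot inside the guest
qm start 104    # kvm restarts and re-reads the args line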

Proxmox VE : http://pve.proxmox.com/wiki/Main_Page

WHAT IS SPICE? qemu-kvm audio output and clipboard sharing

From: http://spice-space.org/page/Main_Page

What is Spice?

SPICE (the Simple Protocol for Independent Computing Environments) is a remote-display system built for virtual environments which allows users to view a computing "desktop" environment – not only on its computer-server machine, but also from anywhere on the Internet and using a wide variety of machine architectures.

about this wiki

With this wiki the Spice team hopes to make it easier for contributors to add their knowledge and help the Spice project. Feel free to edit or add new pages related to Spice, or just search for answers to your questions. This wiki is a source of user and developer documentation and everybody is welcome to improve it.

This wiki is not intended for asking support questions. If you have questions, please refer to the support page on the Spice site.


Letting a qemu-kvm virtual machine share the clipboard between host and guest, output audio from the guest, and so on: SPICE can handle all of this.

virt-manager: re-adding the system disk (virtio) gives poor performance in an XP guest


Removing a virtio disk and adding it back can peg the CPU. Once inside the guest, use Device Manager to remove the hidden, no-longer-present virtio devices and controllers.
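For example, a sketch of revealing the hidden devices from an XP command prompt (after devmgmt.msc opens, enable View > Show hidden devices, then uninstall the ghosted virtio entries):

set devmgr_show_nonpresent_devices=1
start devmgmt.msc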

The problem above is probably the old and new devices fighting over resources, which drives the CPU up, or XP re-detecting the virtual devices, which makes the system extremely slow.

I also tried building a brand-new XP virtual machine that reused the original system disk; that resolved the problem as well.

Set up Spice-gtk 0.12-2 via Debian Unstable

From: https://launchpad.net/~bderzhavets/+archive/lib-usbredir87

Set up Spice-gtk 0.12-2 via Debian Unstable on Ubuntu Precise (v.2)

PPA description

$ sudo add-apt-repository ppa:bderzhavets/lib-usbredir87
$ sudo apt-get update
$ sudo apt-get install qemu-kvm qemu qemu-common qemu-utils \
seabios vgabios \
spice-client libusb-1.0-0 libusb-1.0-0-dev \
libspice-protocol-dev libspice-server-dev \
libusbredirhost-dev libusbredirhost1 \
libusbredirparser-dev libusbredirparser0 \
usbredirserver \
gir1.2-spice-client-glib-2.0 gir1.2-spice-client-gtk-2.0 \
gir1.2-spice-client-gtk-3.0 \
libspice-client-glib-2.0-1 libspice-client-glib-2.0-dev \
libspice-client-gtk-2.0-1 libspice-client-gtk-2.0-dev \
libspice-client-gtk-3.0-1 libspice-client-gtk-3.0-dev \
python-spice-client-gtk spice-client-gtk

$ sudo apt-get install virtinst virt-manager virt-viewer
$ sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/

$ sudo adduser $USER libvirtd

reboot

Add the extra package repository ppa:bderzhavets/lib-usbredir87 and refresh the package lists; the spice-related packages can then be installed from this repository. After installing all of the packages above and rebooting, our virt-manager can use the spice protocol to display guest virtual machines.
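As a quick sanity check after the reboot (a sketch; spicy is the test client shipped with the spice-client-gtk packages above, and <host>/<port> are placeholders):

$ dpkg -l | grep -i spice
$ spicy -h <host> -p <port>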

Speed up FreeBSD KVM guests using Virtio Drivers

Virtio has been around for a while in Linux but it's still fairly new on FreeBSD, with support only arriving in versions 8.2 and 9.0. That's something of a shame since tools like virt-manager make piss-poor default choices when creating FreeBSD guest VMs. Like configuring PIO-only IDE controllers from the early '90s, limited to 16MB/s throughput... savages! If you do have a capable version of FreeBSD, loading the virtio drivers certainly is worth the effort.

The first step after installing FreeBSD 9 would be to get your system up to date. I use freebsd-update for that, as it works a lot faster than compiling the whole OS from source for a minor patch.

#freebsd-update fetch
#freebsd-update install

These commands bring your system up to date, but you’ll need to have the system sources in place in order to build the virtio kernel modules. These modules are still experimental so they aren’t in the base system yet. In order to install them you need to install the emulators/virtio-kmod port. First we pull in sources.

#cp /usr/share/examples/cvsup/standard-supfile /root

In the resulting /root/standard-supfile you should look for the part where it says CHANGE_THIS, and modify that into a valid FreeBSD cvsup server. I always change it into cvsup.nl.freebsd.org but your mileage may vary.
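For example, a one-line sketch of that edit (assuming the stock host line reads *default host=CHANGE_THIS.FreeBSD.org):

#sed -i '' 's/CHANGE_THIS/cvsup.nl/' /root/standard-supfile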

#cd /root
#csup ./standard-supfile

The above commands start the download of all the system sources. They’ll end up in /usr/src/sys and weigh in at about 200MB so make sure you have enough room to store them.

Now you have everything you need to build the virtio modules. From /usr/ports/emulators/virtio-kmod you run the usual commands to install a port:

#make
#make install
#make clean

This takes a while as it copies tons of source files from /usr/src/sys into the port’s work directory, which is why the make clean at the end is important to save you some disk space. These steps give you compatible kernel modules for loading. Now make sure you actually load them at system startup by adding them to /boot/loader.conf like so:

virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"

Apparently you can pick and choose which drivers to use, but I have yet to find a reason not to simply use all of them.

Before any of the virtio magic will work, you’ll need to tell FreeBSD to actually use it for its network driver and block devices. By default the virt-manager tool sets up any FreeBSD VM with an emulated Intel gigabit network card, which FreeBSD recognizes as em0. Add the following line to /etc/rc.conf to rename your virtio network card to em0 and use any existing network settings.

ifconfig_vtnet0_name="em0"

You should also change your /etc/fstab file to use virtio block devices instead of the emulated IDE drives. A line like:

/dev/ada0p2 / ufs rw 1 1

..would become:

/dev/vtbd0p2 / ufs rw 1 1

You're almost done now. Fire up virt-manager on your host machine and modify the emulated hardware in your FreeBSD VM. Choose 'virtio' for both the emulated network device and for each IDE, SATA or SCSI hard disk you emulate.

Finally, shut down the VM (do not reboot it, really shut it down!), virtually power it back up and things should be hunky-dory with FreeBSD gaining some considerable speed in its network and block I/O layers.
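To confirm the drivers actually attached after that cold boot, a sketch (device names per the virtio modules loaded above):

#dmesg | grep -E 'virtio|vtnet|vtbd'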

howto change virtio disk to boot on kvm guest os


Create a disk and install the OS to it by appending "-hda <your_disk_image>" to your virtual machine command line.

In your guest OS, upgrade the kernel to 2.6.25, which contains the virtio_* drivers; Ubuntu 8.04 also ships them.

In the guest OS, change /boot/grub/device.map from "(hd0) /dev/sda" to "(hd0) /dev/vda".

In the guest OS, change /boot/grub/menu.lst from "root=/dev/sda1" to "root=/dev/vda1". If you are using UUIDs, this step is unnecessary.

Enable paravirtualization by changing "-hda <your_disk_image>" to "-drive file=<your_disk_image>,if=virtio,boot=on".
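Putting it together, a sketch of the launch-command change (disk.img and the kvm binary name are placeholders):

# before: emulated IDE disk
kvm -hda disk.img
# after: paravirtualized virtio disk
kvm -drive file=disk.img,if=virtio,boot=on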

Could not access KVM kernel module: No such file or directory

Resolving errors on Proxmox VE 2.1

If creating a virtual machine produces an error like the following:

Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
No accelerator found!
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=monitor,path=/var/run/qemu-server/100.mon,server,nowait' -mon 'chardev=monitor,mode=readline' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -usbdevice tablet -name xp -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -localtime -rtc-td-hack -k en-us -drive 'file=/var/lib/vz/template/iso/cn_windows_7_ultimate_x86_dvd_x15-65907.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/mnt/pve/KS86/images/100/vm-100-disk-1.raw,if=none,id=drive-ide1,aio=native,cache=none' -device 'ide-hd,bus=ide.0,unit=1,drive=drive-ide1,id=ide1,bootindex=100' -m 2048 -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge' -device 'rtl8139,mac=02:56:EB:46:6A:60,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -cpuunits 1000' failed: exit code 1

The fix is simple: virtualization is probably not enabled in the BIOS, and KVM requires hardware virtualization support. On Intel boards, enable the "Intel Virtualization Technology" option; on other boards, find the VT-related option and enable it. Note that on AMD boards the virtualization option is called "SVM".
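Before rebooting into the BIOS, you can confirm this from the Proxmox host (a sketch):

# prints 0 if the CPU flag is absent, i.e. VT is disabled or unsupported
egrep -c '(vmx|svm)' /proc/cpuinfo
# if the flag is present, check that the kvm modules are loaded
lsmod | grep kvm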