901 days ago
Unfiled. Edited by 李育丞 901 days ago
The main data structure is an implementation of Python's dictionary; log2grasp.py also uses dictionaries
 
909 days ago
Unfiled. Edited by Henry  Su 909 days ago
Henry S Presentation 2 6/16
 
  1. Find out the name of the timer used in Xvisor
  2. Find the synchronization mechanism between the Guest OS timer and the hypervisor timer
 
 
 
909 days ago
Unfiled. Edited by 李育丞 909 days ago
    gen-timer { /* Generic Timer */
        device_type = "timer";
        compatible = "arm,armv8-timer";
        clock-frequency = <100000000>;
        interrupts = <26 30 27>;
    };

    VMM_CLOCKSOURCE_INIT_DECLARE(gtv8clksrc, "arm,armv8-timer", generic_timer_clocksource_init);
    /** Expands to:
     * __nidtbl struct vmm_devtree_nidtbl_entry __gtv8clksrc = {
     *     .signature = VMM_DEVTREE_NIDTBL_SIGNATURE,
     *     .subsys = "clocksource",
     *     .nodeid.name = " ",
     *     .nodeid.type = " ",
     *     .nodeid.compatible = "arm,armv8-timer",
     *     .nodeid.data = generic_timer_clocksource_init,
     * }
     */

    static int __init generic_timer_clocksource_init(struct vmm_devtree_node *node)
    {
        int rc;
        struct vmm_clocksource *cs;

        rc = vmm_devtree_clock_frequency(node, &generic_timer_hz);

        generic_timer_reg_write(GENERIC_TIMER_REG_FREQ, generic_timer_hz);

        cs = vmm_zalloc(sizeof(struct vmm_clocksource));

        cs->name = "gen-timer";
        cs->rating = 400;
        cs->read = &generic_counter_read;
        cs->mask = VMM_CLOCKSOURCE_MASK(56);
        vmm_clocks_calc_mult_shift(&cs->mult, &cs->shift,
                        generic_timer_hz, VMM_NSEC_PER_SEC, 10);
        generic_timer_mult = cs->mult;
        generic_timer_shift = cs->shift;
        cs->priv = NULL;

        return vmm_clocksource_register(cs);
    }

    VMM_CLOCKCHIP_INIT_DECLARE(gtv8clkchip, "arm,armv8-timer", generic_timer_clockchip_init);
    /** Expands to:
     * __nidtbl struct vmm_devtree_nidtbl_entry __gtv8clkchip = {
     *     .signature = VMM_DEVTREE_NIDTBL_SIGNATURE,
     *     .subsys = "clockchip",
     *     .nodeid.name = " ",
     *     .nodeid.type = " ",
     *     .nodeid.compatible = "arm,armv8-timer",
     *     .nodeid.data = generic_timer_clockchip_init,
     * }
     */

    static int __cpuinit generic_timer_clockchip_init(struct vmm_devtree_node *node)
    {
        int rc;
        u32 irq[3], num_irqs, val;
        struct vmm_clockchip *cc;

        /* Determine generic timer frequency */
        rc = vmm_devtree_clock_frequency(node, &generic_timer_hz);
        generic_timer_reg_write(GENERIC_TIMER_REG_FREQ, generic_timer_hz);

        /* Get hypervisor timer irq number */
        rc = vmm_devtree_irq_get(node,
                                 &irq[GENERIC_HYPERVISOR_TIMER],
                                 GENERIC_HYPERVISOR_TIMER);

        /* Get physical timer irq number */
        rc = vmm_devtree_irq_get(node,
                                 &irq[GENERIC_PHYSICAL_TIMER],
                                 GENERIC_PHYSICAL_TIMER);

        /* Get virtual timer irq number */
        rc = vmm_devtree_irq_get(node,
                                 &irq[GENERIC_VIRTUAL_TIMER],
                                 GENERIC_VIRTUAL_TIMER);

        /* Number of generic timer irqs */
        num_irqs = vmm_devtree_irq_count(node);

        /* Ensure hypervisor timer is stopped */
        generic_timer_stop();

        /* Create generic hypervisor timer clockchip */
        cc = vmm_zalloc(sizeof(struct vmm_clockchip));
        cc->name = "gen-hyp-timer";
        cc->hirq = irq[GENERIC_HYPERVISOR_TIMER];
        ......
        cc->set_mode = &generic_timer_set_mode;
        cc->set_next_event = &generic_timer_set_next_event;
        cc->priv = NULL;

        ........
...
912 days ago
Unfiled. Edited by 宗穎 沈 912 days ago
        
 
        eg = (struct vmm_devemu_guest_context *)guest->aspace.devemu_priv;
 
   gi->handle = handle;       //vgic_irq_handle
 
The guest OS side probes the pl011 emulator via emulators/serial/pl011.c:
rc = vmm_devtree_irq_get(edev->node, &s->irq, 0);
which finds the virtual IRQ number (33) in the devtree
  
 
                     s->fifo_sz, s);
                     
                     => vser->priv = s;
                     
 
struct vmm_vserial *vser = vmm_vserial_find(name); // finds the vser created earlier (which holds virq 33)
struct pl011_state *s = vmm_vserial_priv(vser);
 
Data is then sent to pl011_vserial_send => pl011_set_irq(s, level, enabled)
=> raises virtual interrupt 33
=> vmm_devemu_emulate_irq(s->guest, s->irq, 0); //s->irq = 33
=>
  int __vmm_devemu_emulate_irq(struct vmm_guest *guest,
                   u32 irq, int cpu, int level)
  {
      eg = (struct vmm_devemu_guest_context *)guest->aspace.devemu_priv;
      list_for_each_entry(gi, &eg->g_irq[irq], head) {
          gi->handle(irq, cpu, level, gi->opaque);
      }
  }
=> vgic_irq_handle => VGIC_SET_PENDING(s, irq, target);
After the guest OS receives the virq, it reads the emulated registers through pl011_reg_read.
The usual receive flow is:
interrupt => read flag register (FR) => read DR
So when the guest OS receives virq 33, it uses pl011_reg_read to read the flag register and then the DR
 
 
916 days ago
Unfiled. Edited by Henry  Su 916 days ago
Henry S Presentation 1 Record
 
  1. Translate this sentence in the Xvisor wiki into English: by definition, if an ISA instruction is sensitive, it is either control-sensitive or behavior-sensitive; otherwise it is innocuous (harmless). Furthermore, if all sensitive instructions are privileged, a VMM can be constructed
  2. Add flow diagrams
 
923 days ago
Unfiled. Edited by Danny Deng 923 days ago
  • III. OPEN SOURCE HYPERVISORS FOR EMBEDDED SYSTEMS
 
  • KVM is a partially monolithic hypervisor and supports both full virtualization and para-virtualization
  • KVM extends Linux's execution modes (kernel, user, and a new guest mode), letting KVM use Linux itself as the hypervisor. The guest OS runs in the same execution modes as the host OS, except that certain instructions, register accesses, and I/O accesses are trapped into the host Linux kernel
  • The host Linux kernel treats each VM as a QEMU process; KVM virtualizes the CPU inside the host kernel
  • Like Xen, KVM's biggest advantage is using the Linux kernel as its host kernel, which lets KVM reuse existing Linux device drivers. This comes at a performance cost, however: page faults, instruction traps, host interrupts, and guest I/O events all require a world switch from guest mode to host mode and back
  • KVM has VirtIO. VirtIO references:
 
  • Xvisor is a complete monolithic hypervisor and supports full virtualization and para-virtualization.
  • Xvisor mainly provides fully virtualized guests, and offers VirtIO as its para-virtualization interface
  • Xvisor's core components (CPU virtualization, guest I/O emulation, background threads, para-virtualization services, management services) run as a single piece of software, so no companion tools or files are needed
  • Device Tree Script (DTS) is another major feature: Xvisor manages guest configuration through device trees, so adding a customized guest requires no source-code changes
  • All device drivers run directly in Xvisor with full privilege, and performance is not degraded by Xen-style nested page tables
 
 
  • V. HOST INTERRUPTS
  • Xen
  • Host device drivers run in the Dom0 Linux kernel
  • TODO: to be studied
  • KVM
  • TODO: to be studied
  • Xvisor
  • Xvisor's host device drivers run as part of Xvisor itself with full privilege, so servicing a host interrupt involves no scheduling or context-switch overhead; overhead occurs only when switching from the host back to a guest
 
 
  • VI. LOCK SYNCHRONIZATION LATENCY
 
Lock synchronization latency arises because there are two schedulers:
  1.  the hypervisor scheduler
  2.  the guest OS scheduler
Since neither scheduler knows about the other, guest vCPUs can be preempted by the hypervisor at any time.
 
If lock synchronization is handled poorly, two problems lengthen the time vCPUs of the same guest wait to acquire locks:
  1. vCPU preemption issue 
  • initiated when a vCPU running on a certain host CPU holding a lock is preempted while another vCPU running concomitantly on another host CPU is waiting for that lock. 
  • That is, vCPU X runs on host CPU [A] holding a lock and gets preempted, while vCPU Y runs on another host CPU [B] waiting for that lock
  2. vCPU stacking issue 
  • takes place due to a lock scheduling conflict that occurs on a single host CPU running various vCPUs. That is, a vCPU (vCPU1) accessing a lock is scheduled prior to the vCPU (vCPU0) that is already holding the lock on the same host CPU.
On ARM, an OS uses the Wait For Event (WFE) instruction while waiting to acquire a lock, and the Send Event (SEV) instruction when releasing it. WFE can be trapped by the hypervisor; SEV cannot.
  • To mitigate the vCPU stacking issue, all three hypervisors trap the WFE instruction and yield the vCPU's time slice
  • The vCPU preemption issue is instead addressed with para-virtualized locks, which require modifying the guest OS source
 
  • VII. MEMORY MANAGEMENT
 
Embedded systems require efficient memory handling. The overhead sustained by memory management is an important consideration with embedded hypervisors. The ARM architecture provides two-stage translation tables (or nested page tables) for guest memory virtualization. Fig. 13 shows the two-stage MMU on ARM. The guest OS is responsible for programming the stage-1 translation table, which performs guest virtual address (GVA) to intermediate physical address (IPA) translation. ARM hypervisors are responsible for programming the stage-2 translation table, which performs intermediate physical address (IPA) to actual physical address (PA) translation.
 
ARM
Translation table walks are required upon TLB misses. The number of translation-table levels accessed during such a walk affects the memory bandwidth and overall performance of the virtualized system: with N levels in the stage-1 table and M levels in the stage-2 table, a worst-case walk performs N×M memory accesses (e.g. 4 stage-1 levels and 3 stage-2 levels give 12 accesses). Clearly, the TLB-miss penalty is very expensive for guests on any virtualized system. To reduce it in the two-stage MMU, ARM hypervisors create bigger pages in the stage-2 translation table.
 
 
  • Xen
  •  
  • KVM
  •  
  • Xvisor
  • Xvisor ARM pre-allocates contiguous host memory as guest RAM at guest creation time. It creates a separate three level stage2 translation table for each guest. Xvisor ARM can create 4KB or 2MB or 1GB translation table entries in stage2. Additionally, it always creates the biggest possible translation table entry in stage2 based on IPA and PA alignment. Finally, the guest RAM being flat/contiguous (unlike other hypervisors) helps cache speculative access, which further improves memory accesses for guests. 
  •  
 
 
 
 
 
 
930 days ago
Unfiled. Edited by 宗穎 沈 930 days ago
Following this article,
download img-foundation.axf and vexpress64-openembedded_lamp-armv8-gcc-4.9_20150522-720.img
run: sudo ../Foundation_Platformpkg/models/Linux64_GCC-4.1/Foundation_Platform --image  ./build/foundation_v8_boot.axf --block-device ../Foundation_Platformpkg/models/Linux64_GCC-4.1/vexpress64-openembedded_lamp-armv8-gcc-4.9_20150522-720.img --network=nat --network-nat-ports=1234=1234
 
Then, inside the foundation model, run qemu-system-aarch64 -M virt -nographic -s -S
Outside it, run:
./aarch64-linux-gnu-gdb
(gdb) set debug remote 1
(gdb) target remote :1234
which prints the following:
  • Remote debugging using :1234
  • Sending packet: $qSupported:multiprocess+;qRelocInsn+#2a...Sending packet: $qSupported:multiprocess+;qRelocInsn+#2a...Ack
  • Packet received: PacketSize=1000;qXfer:features:read+
  • Packet qSupported (supported-packets) is supported
  • Sending packet: $Hg0#df...Ack
  • Packet received: PacketSize=1000;qXfer:features:read+
  • Sending packet: $qXfer:features:read:target.xml:0,ffb#79...Ack
  • Packet received: OK
  • Unknown remote qXfer reply: OK
 
Sigh.
If it cannot even connect when run directly, it probably will not work on Xvisor either
 
Members (10)
Danny Deng ajblane 宗穎 沈 謝孟穎 江信則 李育丞 Henry  Su Jim Huang 李明益 lime