
Understanding DMB




Original article; please credit the source when reposting. Reposted from: Li Haifeng's Blog
Permalink: Understanding DMB




DMB: Data memory barrier

To understand the DMB instruction, first look at the following example, in which core 0 and core 1 run two different instruction sequences at the same time (as shown in the table below).















core 0            core 1
Write A;          Load B;
Write B;          Load A;

Here, when core 0 executes these two instructions and writes the values A and B, the writes may be reordered, or Write A may suffer a cache miss, so that the latest value of A reaches the cache later than the latest value of B. As a result, core 1's Load B gets the new value while Load A still gets the old one. If A and B are related to each other, this can lead to problems such as deadlock. A typical example is here: https:///lkml/2012/7/13/123

Hence the following fix:















core 0            core 1
Write A           Load B
DMB;              Load A
Write B

A DMB is inserted between the two stores executed on core 0. With it, if core 1's Load B observes the latest value, then Load A is guaranteed to observe the latest value as well. That is the job of DMB: the latest values read or written by the LOADs/STOREs before the DMB are guaranteed to be acknowledged before the instructions that follow the DMB take effect.
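As an illustration (this sketch is not from the original post; it assumes a GCC-style compiler targeting ARMv7, and the variable and function names are made up), the tables above correspond to the classic message-passing pattern, with data playing the role of A and flag the role of B:

#include <stdint.h>

static volatile uint32_t data;   /* "A" in the tables above */
static volatile uint32_t flag;   /* "B" in the tables above */

#define dmb() __asm__ __volatile__("dmb" ::: "memory")

void producer(uint32_t value)    /* runs on core 0 */
{
    data = value;                /* Write A */
    dmb();                       /* order the two stores */
    flag = 1;                    /* Write B */
}

uint32_t consumer(void)          /* runs on core 1 */
{
    while (flag == 0)            /* Load B */
        ;
    dmb();                       /* order the two loads as well */
    return data;                 /* Load A: now guaranteed to see the new value */
}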

DSB and DMB are easy to confuse. The difference is: after a DMB, the CPU may keep executing subsequent instructions as long as they are not memory accesses, whereas a DSB forces the CPU to wait for all preceding instructions to complete, no matter what follows. In fact, many processor implementations make no distinction between the two (DMB behaves the same as DSB). The ARM reference manual describes them, together with ISB, as follows [1]:


  • Data Synchronization Barrier (DSB) completes when all instructions before this instruction complete.

  • Data Memory Barrier (DMB) ensures that all explicit memory accesses before the DMB instruction complete before any explicit memory accesses after the DMB instruction start.

  • An Instruction Synchronization Barrier (ISB) flushes the pipeline in the processor, so that all instructions following the ISB are fetched from cache or memory, after the ISB has been completed.


ISB not only does what DSB does, it also flushes the pipeline [2]. So, ordered by weight: ISB > DSB > DMB :-)
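As a rough illustration of where the heavier barriers typically show up (a sketch, not from the original post; set_sctlr is a made-up helper), changing a CP15 control register such as SCTLR is usually followed by a DSB to wait for the write to complete and an ISB so that subsequently fetched instructions see the new setting:

#include <stdint.h>

/* Hypothetical helper: write SCTLR, then make sure both the memory system and
 * the instruction stream observe the change before anything else runs. */
static inline void set_sctlr(uint32_t val)
{
    __asm__ __volatile__(
        "mcr p15, 0, %0, c1, c0, 0\n\t"   /* write the control register     */
        "dsb\n\t"                         /* wait until the write completes */
        "isb\n\t"                         /* flush the pipeline and refetch */
        : : "r" (val) : "memory");
}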



References:












September 13th, 2013 in Uncategorized | No comments yet





How to Debug Crash Linux for ARM




Original article; please credit the source when reposting. Reposted from: Li Haifeng's Blog
Permalink: How to Debug Crash Linux for ARM



 



It is well known that there are many tools for debugging a crashed Linux kernel on the x86 architecture, such as kdump, LKCD, etc. Although many of these debugging tools claim to support the ARM architecture, they are unstable there. Is there a reliable method for an ARM SoC? Yes: this post shows a stable method for debugging a crashed ARM Linux kernel. The premise is that you can dump a memory snapshot with some assistive hardware tool when the crash happens; this premise is easy to satisfy for an ARM SoC company.


The method is to compose a crash dump image and then analyze the crashed kernel with the aid of the Crash Utility.


Now I will explain this method step by step.

1. Build the Linux kernel for the ARM SoC. Please make sure that the kexec and debug-info features are enabled:

   Boot options  --->
       [*] Kexec system call (EXPERIMENTAL)
       [*] Export atags in procfs (NEW)
   Kernel hacking  --->
       [*] Compile the kernel with debug info

2. Modify the kernel source code in kernel/kexec.c:

--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -1089,14 +1089,16 @@ void crash_kexec(struct pt_regs *regs)
         * sufficient.  But since I reuse the memory...
         */
        if (mutex_trylock(&kexec_mutex)) {
-               if (kexec_crash_image) {
+//             if (kexec_crash_image) {
                        struct pt_regs fixed_regs;

                        crash_setup_regs(&fixed_regs, regs);
                        crash_save_vmcoreinfo();
                        machine_crash_shutdown(&fixed_regs);
-                       machine_kexec(kexec_crash_image);
-               }
+                       flush_cache_all();
+//                     machine_kexec(kexec_crash_image);
+//             }
                mutex_unlock(&kexec_mutex);
        }

3. Build the Linux kernel.

4. After the Linux kernel runs and the console comes up, record two physical addresses:

   a. cpu_notes_paddr:    $ cat /sys/devices/system/cpu/cpu0/crash_notes
   b. vmcore_notes_paddr: $ cat /sys/kernel/vmcoreinfo

5. When a crash happens, use an assistive hardware method to dump the memory snapshot.

6. Compose an image usable by the Crash Utility.

   This image is in ELF format and contains several program headers. If the memory is one flat, contiguous range, two program headers are enough: one for the cpu_notes and vmcore_notes, the other for the memory snapshot. I have written a tool for this that can be used as a reference; the code is at https://github.com/tek-life/dump-crash-tool. A rough sketch of such a composer is also shown after this list.

7. After step 6 is complete, a new file 'newvmcore' is obtained. Then the Crash Utility can be used to analyze the crashed Linux kernel:

   $ ./crash vmlinux newvmcore

   'vmlinux' is the kernel image with debug information; 'newvmcore' comes from step 6.
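For step 6, here is a rough sketch of such a composer (not from the original post; for the real tool see the dump-crash-tool repository above). It assumes a 32-bit ARM target with a single flat RAM region, and RAM_PADDR, RAM_VADDR, RAM_SIZE and NOTE_SIZE are made-up placeholders for your board's values. It only writes the ELF header and the two program headers; the note blob and the raw RAM dump are then appended at the recorded offsets.

/* compose_vmcore.c - minimal sketch of composing an ELF vmcore for Crash */
#include <elf.h>
#include <stdio.h>
#include <string.h>

#define RAM_PADDR  0x80000000UL     /* physical base of the dumped RAM (placeholder)   */
#define RAM_VADDR  0xC0000000UL     /* PAGE_OFFSET of the crashed kernel (placeholder) */
#define RAM_SIZE   (128UL << 20)    /* size of the RAM snapshot (placeholder)          */
#define NOTE_SIZE  0x1000UL         /* size of the extracted note blob (placeholder)   */

int main(void)
{
    Elf32_Ehdr eh;
    Elf32_Phdr ph[2];
    Elf32_Off  off = sizeof(eh) + sizeof(ph);

    memset(&eh, 0, sizeof(eh));
    memcpy(eh.e_ident, ELFMAG, SELFMAG);
    eh.e_ident[EI_CLASS]   = ELFCLASS32;
    eh.e_ident[EI_DATA]    = ELFDATA2LSB;
    eh.e_ident[EI_VERSION] = EV_CURRENT;
    eh.e_type      = ET_CORE;
    eh.e_machine   = EM_ARM;
    eh.e_version   = EV_CURRENT;
    eh.e_phoff     = sizeof(eh);
    eh.e_ehsize    = sizeof(eh);
    eh.e_phentsize = sizeof(Elf32_Phdr);
    eh.e_phnum     = 2;

    memset(ph, 0, sizeof(ph));
    /* PT_NOTE: the crash_notes of each CPU followed by the vmcoreinfo note */
    ph[0].p_type   = PT_NOTE;
    ph[0].p_offset = off;
    ph[0].p_filesz = NOTE_SIZE;
    off += NOTE_SIZE;

    /* PT_LOAD: the raw memory snapshot of the whole RAM */
    ph[1].p_type   = PT_LOAD;
    ph[1].p_offset = off;
    ph[1].p_vaddr  = RAM_VADDR;
    ph[1].p_paddr  = RAM_PADDR;
    ph[1].p_filesz = RAM_SIZE;
    ph[1].p_memsz  = RAM_SIZE;
    ph[1].p_flags  = PF_R | PF_W | PF_X;

    FILE *out = fopen("newvmcore", "wb");
    if (!out)
        return 1;
    fwrite(&eh, sizeof(eh), 1, out);
    fwrite(ph,  sizeof(ph), 1, out);
    /* ...then append NOTE_SIZE bytes of note data and RAM_SIZE bytes of the
     * RAM dump, in that order, so that the offsets above stay valid. */
    fclose(out);
    return 0;
}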


The following text is the Crash Utility output.


$ ./crash examples/vmlinux examples/vmcore


crash 6.1.5

Copyright (C) 2002-2013  Red Hat, Inc.

Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation

Copyright (C) 1999-2006  Hewlett-Packard Co

Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited

Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.

Copyright (C) 2005, 2011  NEC Corporation

Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.

Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.

This program is free software, covered by the GNU General Public License,

and you are welcome to change it and/or distribute copies of it under

certain conditions.  Enter “help copying” to see the conditions.

This program has absolutely no warranty.  Enter “help warranty” for details.


GNU gdb (GDB) 7.3.1

Copyright (C) 2011 Free Software Foundation, Inc.

License GPLv3+: GNU GPL version 3 or later <http:///licenses/gpl.html>

This is free software: you are free to change and redistribute it.

There is NO WARRANTY, to the extent permitted by law.  Type “show copying”

and “show warranty” for details.

This GDB was configured as "--host=i686-pc-linux-gnu --target=arm-linux-gnueabi"...


      KERNEL: examples/vmlinux

    DUMPFILE: examples/vmcore

        CPUS: 1

        DATE: Sat Jan  1 10:29:39 2000

      UPTIME: 00:00:51

LOAD AVERAGE: 0.08, 0.03, 0.01

       TASKS: 22

    NODENAME: Linux

     RELEASE: 3.4.5-00527-g320e261-dirty

     VERSION: #506 Fri May 10 15:57:51 CST 2013

     MACHINE: armv7l  (unknown Mhz)

      MEMORY: 128 MB

       PANIC: “[   51.520000] Internal error: Oops: 817 [#1] ARM” (check log for details)

         PID: 297

     COMMAND: “sh”

        TASK: c7870900  [THREAD_INFO: c793a000]

         CPU: 0

       STATE: TASK_RUNNING (PANIC)


crash>














June 3rd, 2013 in Uncategorized | No comments yet





Understanding Kdump (How to make crashnote)




Original article; please credit the source when reposting. Reposted from: Li Haifeng's Blog
Permalink: Understanding Kdump (How to make crashnote)




The crash note contains the register state at the moment the crash happens. In the kernel, this data is stored in "note_buf_t __percpu *crash_notes".


include/linux/kexec.h

typedef u32 note_buf_t[KEXEC_NOTE_BYTES/4];


The crash note is also one part of /proc/vmcore. The address of crash_notes, obtained by reading /sys/devices/system/cpu/cpu0/crash_notes, is recorded in a program header, which in turn can be found through "elfcorehdr". Filling in crash_notes is done by crash_save_cpu().


crash_kexec->machine_crash_shutdown->crash_save_cpu:

1206 void crash_save_cpu(struct pt_regs *regs, int cpu)

1207 {

1208         struct elf_prstatus prstatus;

1209         u32 *buf;

1210

1211         if ((cpu < 0) || (cpu >= nr_cpu_ids))

1212                 return;

1213

1214         /* Using ELF notes here is opportunistic.

1215          * I need a well defined structure format

1216          * for the data I pass, and I need tags

1217          * on the data to indicate what information I have

1218          * squirrelled away.  ELF notes happen to provide

1219          * all of that, so there is no need to invent something new.

1220          */

1221         buf = (u32*)per_cpu_ptr(crash_notes, cpu);

1222         if (!buf)

1223                 return;

1224         memset(&prstatus, 0, sizeof(prstatus));

1225         prstatus.pr_pid = current->pid;

1226         elf_core_copy_kernel_regs(&prstatus.pr_reg, regs);

1227         buf = append_elf_note(buf, KEXEC_CORE_NOTE_NAME, NT_PRSTATUS,

1228                               &prstatus, sizeof(prstatus));

1229         final_note(buf);

1230 }


The final layout is illustrated below.
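As an illustration of that layout (a sketch, not from the original post; RAM_PADDR is an assumed physical base of the dump and error handling is minimal), the following program walks the notes written by append_elf_note()/final_note() inside a raw RAM dump: each note is an Elf32_Nhdr, the name "CORE" padded to a 4-byte boundary, and an elf_prstatus descriptor padded to a 4-byte boundary, terminated by an all-zero note.

/* walk_crash_notes.c - parse the crash_notes of one CPU from a raw RAM dump */
#include <elf.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define RAM_PADDR 0x80000000UL   /* assumed physical base of the RAM dump */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <ramdump> <crash_notes_paddr>\n", argv[0]);
        return 1;
    }
    unsigned long paddr = strtoul(argv[2], NULL, 0);
    FILE *f = fopen(argv[1], "rb");
    if (!f)
        return 1;
    fseek(f, (long)(paddr - RAM_PADDR), SEEK_SET);

    uint8_t buf[4096];
    if (fread(buf, 1, sizeof(buf), f) == 0)
        return 1;
    fclose(f);

    size_t off = 0;
    for (;;) {
        Elf32_Nhdr *nh = (Elf32_Nhdr *)(buf + off);
        if (nh->n_namesz == 0 && nh->n_descsz == 0 && nh->n_type == 0)
            break;                          /* the empty note from final_note() */
        printf("note '%s', type %u, desc %u bytes\n",
               (char *)(nh + 1), nh->n_type, nh->n_descsz);
        off += sizeof(*nh)
             + ((nh->n_namesz + 3) & ~3u)   /* name padded to 4 bytes       */
             + ((nh->n_descsz + 3) & ~3u);  /* descriptor padded to 4 bytes */
    }
    return 0;
}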














April 27th, 2013 in Uncategorized | No comments yet





Understanding Kdump (How to make vmcoreinfo_note)




Original article; please credit the source when reposting. Reposted from: Li Haifeng's Blog
Permalink: Understanding Kdump (How to make vmcoreinfo_note)




vmcoreinfo_note contains general information about the crashed kernel, including the OS version, page size, etc. In the kernel, this information is stored in vmcoreinfo_note[]. vmcoreinfo_note is also one part of /proc/vmcore, which is used for debugging once the capture kernel is brought up. In the post "Understanding Kdump (Loading Part)", a program header containing vmcoreinfo_note's address and length was described; that program header can be found through "elfcorehdr". vmcoreinfo_note[]'s address and length can be obtained by reading /sys/kernel/vmcoreinfo.

When kexec is configured, the following function is triggered as the kernel boots.



1458 static int __init crash_save_vmcoreinfo_init(void)

1459 {

1460         VMCOREINFO_OSRELEASE(init_uts_ns.name.release);

1461         VMCOREINFO_PAGESIZE(PAGE_SIZE);

1462

1463         VMCOREINFO_SYMBOL(init_uts_ns);

1464         VMCOREINFO_SYMBOL(node_online_map);

1465         VMCOREINFO_SYMBOL(swapper_pg_dir);

1466         VMCOREINFO_SYMBOL(_stext);

1467         VMCOREINFO_SYMBOL(vmlist);

1468

1469 #ifndef CONFIG_NEED_MULTIPLE_NODES

1470         VMCOREINFO_SYMBOL(mem_map);

1471         VMCOREINFO_SYMBOL(contig_page_data);

1472 #endif

1473 #ifdef CONFIG_SPARSEMEM

1474         VMCOREINFO_SYMBOL(mem_section);

1475         VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);

1476         VMCOREINFO_STRUCT_SIZE(mem_section);

1477         VMCOREINFO_OFFSET(mem_section, section_mem_map);

1478 #endif

1479         VMCOREINFO_STRUCT_SIZE(page);

1480         VMCOREINFO_STRUCT_SIZE(pglist_data);

1481         VMCOREINFO_STRUCT_SIZE(zone);

1482         VMCOREINFO_STRUCT_SIZE(free_area);

1483         VMCOREINFO_STRUCT_SIZE(list_head);

1484         VMCOREINFO_SIZE(nodemask_t);

1485         VMCOREINFO_OFFSET(page, flags);

1486         VMCOREINFO_OFFSET(page, _count);

1487         VMCOREINFO_OFFSET(page, mapping);

1488         VMCOREINFO_OFFSET(page, lru);

1489         VMCOREINFO_OFFSET(pglist_data, node_zones);

1490         VMCOREINFO_OFFSET(pglist_data, nr_zones);

1491 #ifdef CONFIG_FLAT_NODE_MEM_MAP

1492         VMCOREINFO_OFFSET(pglist_data, node_mem_map);

1493 #endif

1494         VMCOREINFO_OFFSET(pglist_data, node_start_pfn);

1495         VMCOREINFO_OFFSET(pglist_data, node_spanned_pages);

1496         VMCOREINFO_OFFSET(pglist_data, node_id);

1497         VMCOREINFO_OFFSET(zone, free_area);

1498         VMCOREINFO_OFFSET(zone, vm_stat);

1499         VMCOREINFO_OFFSET(zone, spanned_pages);

1500         VMCOREINFO_OFFSET(free_area, free_list);

1501         VMCOREINFO_OFFSET(list_head, next);

1502         VMCOREINFO_OFFSET(list_head, prev);

1503         VMCOREINFO_OFFSET(vm_struct, addr);

1504         VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER);

1505         log_buf_kexec_setup();

1506         VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES);

1507         VMCOREINFO_NUMBER(NR_FREE_PAGES);

1508         VMCOREINFO_NUMBER(PG_lru);

1509         VMCOREINFO_NUMBER(PG_private);

1510         VMCOREINFO_NUMBER(PG_swapcache);

1511

1512         arch_crash_save_vmcoreinfo();

1513         update_vmcoreinfo_note();

1514

1515         return 0;

1516 }


 149 #define VMCOREINFO_OSRELEASE(value) \
 150         vmcoreinfo_append_str("OSRELEASE=%s\n", value)
 151 #define VMCOREINFO_PAGESIZE(value) \
 152         vmcoreinfo_append_str("PAGESIZE=%ld\n", value)
 153 #define VMCOREINFO_SYMBOL(name) \
 154         vmcoreinfo_append_str("SYMBOL(%s)=%lx\n", #name, (unsigned long)&name)
 155 #define VMCOREINFO_SIZE(name) \
 156         vmcoreinfo_append_str("SIZE(%s)=%lu\n", #name, \
 157                               (unsigned long)sizeof(name))
 158 #define VMCOREINFO_STRUCT_SIZE(name) \
 159         vmcoreinfo_append_str("SIZE(%s)=%lu\n", #name, \
 160                               (unsigned long)sizeof(struct name))
 161 #define VMCOREINFO_OFFSET(name, field) \
 162         vmcoreinfo_append_str("OFFSET(%s.%s)=%lu\n", #name, #field, \
 163                               (unsigned long)offsetof(struct name, field))
 164 #define VMCOREINFO_LENGTH(name, value) \
 165         vmcoreinfo_append_str("LENGTH(%s)=%lu\n", #name, (unsigned long)value)
 166 #define VMCOREINFO_NUMBER(name) \
 167         vmcoreinfo_append_str("NUMBER(%s)=%ld\n", #name, (long)name)
 168 #define VMCOREINFO_CONFIG(name) \
 169         vmcoreinfo_append_str("CONFIG_%s=y\n", #name)


1428 void vmcoreinfo_append_str(const char *fmt, ...)

1429 {

1430         va_list args;

1431         char buf[0x50];

1432         int r;

1433

1434         va_start(args, fmt);

1435         r = vsnprintf(buf, sizeof(buf), fmt, args);

1436         va_end(args);

1437

1438         if (r + vmcoreinfo_size > vmcoreinfo_max_size)

1439                 r = vmcoreinfo_max_size - vmcoreinfo_size;

1440

1441         memcpy(&vmcoreinfo_data[vmcoreinfo_size], buf, r);

1442

1443         vmcoreinfo_size += r;


1444 }

vmcoreinfo_append_str() stores the necessary vmcore information into vmcoreinfo_data[]. The address and length of the resulting note can be obtained by reading /sys/kernel/vmcoreinfo.
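For illustration only (the values below are made up; the keys come from the macros listed above), the text accumulated in vmcoreinfo_data[] is a series of plain "KEY=value" lines such as:

OSRELEASE=3.4.5
PAGESIZE=4096
SYMBOL(swapper_pg_dir)=c0004000
SIZE(page)=32
OFFSET(page.flags)=0
LENGTH(zone.free_area)=11
NUMBER(PG_lru)=5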


1411 static void update_vmcoreinfo_note(void)

1412 {

1413         u32 *buf = vmcoreinfo_note;

1414

1415         if (!vmcoreinfo_size)

1416                 return;

1417         buf = append_elf_note(buf, VMCOREINFO_NOTE_NAME, 0, vmcoreinfo_data,

1418                               vmcoreinfo_size);

1419         final_note(buf);


1420 }

update_vmcoreinfo_note->append_elf_note:

1179 static u32 *append_elf_note(u32 *buf, char *name, unsigned type, void *data,

1180                             size_t data_len)

1181 {

1182         struct elf_note note;

1183

1184         note.n_namesz = strlen(name) + 1;

1185         note.n_descsz = data_len;

1186         note.n_type   = type;

1187         memcpy(buf, &note, sizeof(note));

1188         buf += (sizeof(note) + 3)/4;

1189         memcpy(buf, name, note.n_namesz);

1190         buf += (note.n_namesz + 3)/4;

1191         memcpy(buf, data, note.n_descsz);

1192         buf += (note.n_descsz + 3)/4;

1193

1194         return buf;

1195 }

update_vmcoreinfo_note() transfers vmcoreinfo_data into vmcoreinfo_note. When a crash happens, another string "CRASHTIME=xxx" is appended to vmcoreinfo_data, and vmcoreinfo_note is rewritten accordingly. The final layout of vmcoreinfo_note is illustrated below.












April 27th, 2013 in Uncategorized | No comments yet





Understanding Kdump (Executing Part)




Original article; please credit the source when reposting. Reposted from: Li Haifeng's Blog
Permalink: Understanding Kdump (Executing Part)



The capture kernel is brought up when a panic happens. This post analyzes where this happens in the source code and how the routine proceeds.



69 void panic(const char *fmt, …)

 70 {

 71         static DEFINE_SPINLOCK(panic_lock);

 72         static char buf[1024];

 73         va_list args;

 74         long i, i_next = 0;

 75         int state = 0;

 76

 77         /*

 78          * It’s possible to come here directly from a panic-assertion and

 79          * not have preempt disabled. Some functions called from here want

 80          * preempt to be disabled. No point enabling it later though…

 81          *

 82          * Only one CPU is allowed to execute the panic code from here. For

 83          * multiple parallel invocations of panic, all other CPUs either

 84          * stop themself or will wait until they are stopped by the 1st CPU

 85          * with smp_send_stop().

 86          */

 87         if (!spin_trylock(&panic_lock))

 88                 panic_smp_self_stop();

 89

 90         console_verbose();

 91         bust_spinlocks(1);

 92         va_start(args, fmt);

 93         vsnprintf(buf, sizeof(buf), fmt, args);

 94         va_end(args);

 95         printk(KERN_EMERG "Kernel panic - not syncing: %s\n",buf);

 96 #ifdef CONFIG_DEBUG_BUGVERBOSE

 97         /*

 98          * Avoid nested stack-dumping if a panic occurs during oops processing

 99          */

100         if (!test_taint(TAINT_DIE) && oops_in_progress <= 1)

101                 dump_stack();

102 #endif

103

104         /*

105          * If we have crashed and we have a crash kernel loaded let it handle

106          * everything else.

107          * Do we want to call this before we try to display a message?

108          */

109         crash_kexec(NULL);

panic->crash_kexec:

1081 void crash_kexec(struct pt_regs *regs)

1082 {

1083         /* Take the kexec_mutex here to prevent sys_kexec_load

1084          * running on one cpu from replacing the crash kernel

1085          * we are using after a panic on a different cpu.

1086          *

1087          * If the crash kernel was not located in a fixed area

1088          * of memory the xchg(&kexec_crash_image) would be

1089          * sufficient.  But since I reuse the memory…

1090          */

1091         if (mutex_trylock(&kexec_mutex)) {

1092                 if (kexec_crash_image) {

1093                         struct pt_regs fixed_regs;

1094

1095                         crash_setup_regs(&fixed_regs, regs);

1096                         crash_save_vmcoreinfo();

1097                         machine_crash_shutdown(&fixed_regs);

1098                         machine_kexec(kexec_crash_image);

1099                 }

1100                 mutex_unlock(&kexec_mutex);

1101         }

1102 }


As described in the loading part, when a capture kernel is loaded into memory, kexec_crash_image is filled in. There are 4 steps here, each corresponding to one line of the code above:

1. crash_setup_regs(): save the registers at the panic spot. regs is the parameter passed to crash_kexec() by the exception handling path; here the regs are copied into fixed_regs.

2. crash_save_vmcoreinfo(): record the current time into vmcoreinfo_data[], then transfer it into vmcoreinfo_note[]. Its address can be obtained by reading /sys/kernel/vmcoreinfo.

3. machine_crash_shutdown(): save fixed_regs into crash_notes[], which can be read through /sys/devices/system/cpu/cpu0/crash_notes; then disable interrupts via machine_kexec_mask_interrupts().

4. machine_kexec(): copy the relocate_new_kernel code to reboot_code_buffer, then hand control over to it.


panic->crash_kexec->machine_kexec

 82 void machine_kexec(struct kimage *image)

 83 {

 84         unsigned long page_list;

 85         unsigned long reboot_code_buffer_phys;

 86         void *reboot_code_buffer;

 87

 88

 89         page_list = image->head & PAGE_MASK;

 90

 91         /* we need both effective and real address here */

 92         reboot_code_buffer_phys =

 93             page_to_pfn(image->control_code_page) << PAGE_SHIFT;

 94         reboot_code_buffer = page_address(image->control_code_page);

 95

 96         /* Prepare parameters for reboot_code_buffer*/

 97         kexec_start_address = image->start;

 98         kexec_indirection_page = page_list;

 99         kexec_mach_type = machine_arch_type;

100         kexec_boot_atags = image->start - KEXEC_ARM_ZIMAGE_OFFSET + KEXEC_ARM_ATAGS_OFFSET;

101

102         /* copy our kernel relocation code to the control code page */

103         memcpy(reboot_code_buffer,

104                relocate_new_kernel, relocate_new_kernel_size);

105

106

107         flush_icache_range((unsigned long) reboot_code_buffer,

108                            (unsigned long) reboot_code_buffer + KEXEC_CONTROL_PAGE_SIZE);

109         printk(KERN_INFO "Bye!\n");

110

111         if (kexec_reinit)

112                 kexec_reinit();

113

114         soft_restart(reboot_code_buffer_phys);

115 }

panic->crash_kexec->machine_kexec->soft_restart


131 void soft_restart(unsigned long addr)

132 {

133         u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);

134

135         /* Disable interrupts first */

136         local_irq_disable();

137         local_fiq_disable();

138

139         /* Disable the L2 if we’re the last man standing. */

140         if (num_online_cpus() == 1)

141                 outer_disable();

142

143         /* Change to the new stack and continue with the reset. */

144         call_with_stack(__soft_restart, (void *)addr, (void *)stack);

145

146         /* Should never get here. */

147         BUG();

148 }


panic->crash_kexec->machine_kexec->soft_restart->call_with_stack


 24 /*

 25  * void call_with_stack(void (*fn)(void *), void *arg, void *sp)

 26  *

 27  * Change the stack to that pointed at by sp, then invoke fn(arg) with

 28  * the new stack.

 29  */

 30 ENTRY(call_with_stack)

 31         str     sp, [r2, #-4]!

 32         str     lr, [r2, #-4]!

 33

 34         mov     sp, r2

 35         mov     r2, r0

 36         mov     r0, r1

 37

 38         adr     lr, BSYM(1f)

 39         mov     pc, r2

 40

 41 1:      ldr     lr, [sp]

 42         ldr     sp, [sp, #4]

 43         mov     pc, lr

 44 ENDPROC(call_with_stack)


panic->crash_kexec->machine_kexec->soft_restart->call_with_stack->__soft_restart


107 static void __soft_restart(void *addr)

108 {

109         phys_reset_t phys_reset;

110

111         /* Take out a flat memory mapping. */

112         setup_mm_for_reboot();

113

114         /* Clean and invalidate caches */

115         flush_cache_all();

116

117         /* Turn off caching */

118         cpu_proc_fin();

119

120         /* Push out any further dirty data, and ensure cache is empty */

121         flush_cache_all();

122

123         /* Switch to the identity mapping. */

124         phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);

125         phys_reset((unsigned long)addr);

126

127         /* Should never get here. */

128         BUG();

129 }

panic->crash_kexec->machine_kexec->soft_restart->call_with_stack->__soft_restart->cpu_v7_reset


 41  *      cpu_v7_reset(loc)

 42  *

 43  *      Perform a soft reset of the system. Put the CPU into the

 44  *      same state as it would be if it had been reset, and branch

 45  *      to what would be the reset vector.

 46  *

 47  *      - loc   - location to jump to for soft reset

 48  *

 49  *      This code must be executed using a flat identity mapping with

 50  *      caches disabled.

 51  */

 52         .align  5

 53         .pushsection    .idmap.text, "ax"

 54 ENTRY(cpu_v7_reset)

 55         mrc     p15, 0, r1, c1, c0, 0           @ ctrl register

 56         bic     r1, r1, #0x1                    @ ...............m

 57  THUMB( bic     r1, r1, #1 << 30 )              @ SCTLR.TE (Thumb exceptions)

 58         mcr     p15, 0, r1, c1, c0, 0           @ disable MMU

 59         isb

 60         mov     pc, r0

 61 ENDPROC(cpu_v7_reset)


panic->crash_kexec->machine_kexec->soft_restart->call_with_stack->__soft_restart->cpu_v7_reset->relocate_new_kernel


  7         .globl relocate_new_kernel

  8 relocate_new_kernel:

  9

 10         ldr     r0,kexec_indirection_page //image->head is 0

 11         ldr     r1,kexec_start_address  // “image->start” is reserved memory start+32KB.

 12

 13         /*

 14          * If there is no indirection page (we are doing crashdumps)

 15          * skip any relocation.

 16          */

 17         cmp     r0, #0

 18         beq     2f

 19

 20 0:      /* top, read another word for the indirection page */

 21         ldr     r3, [r0],#4

 22

 23         /* Is it a destination page. Put destination address to r4 */

 24         tst     r3,#1,0

 25         beq     1f

 26         bic     r4,r3,#1

 27         b       0b

 28 1:

 29         /* Is it an indirection page */

 30         tst     r3,#2,0

 31         beq     1f

 32         bic     r0,r3,#2

 33         b       0b

 34 1:

 35

 36         /* are we done ? */

 37         tst     r3,#4,0

 38         beq     1f

 39         b       2f

 40

 41 1:

 42         /* is it source ? */

 43         tst     r3,#8,0

 44         beq     0b

 45         bic r3,r3,#8

 46         mov r6,#1024

 47 9:

 48         ldr r5,[r3],#4

 49         str r5,[r4],#4

 50         subs r6,r6,#1

 51         bne 9b

 52         b 0b

 53

 54 2:

 55         /* Jump to relocated kernel */

 56         mov lr,r1

 57         mov r0,#0

 58         ldr r1,kexec_mach_type

 59         ldr r2,kexec_boot_atags

 60  ARM(   mov pc, lr      )

 61  THUMB( bx lr           )


For the kdump case, the code actually executed in relocate_new_kernel is Line 10~18 and Line 54~60.

When the PC jumps to the capture kernel's start address, the register values are as listed below.

r0:0

r1: MACH_TYPE_INTEGRATOR

r2:Reserved memory start+1KB

PC:Reserved memory start+32KB


In short, before the soft reboot, the system will:


1. Disable IRQs and FIQs (in soft_restart)

2. Turn off the caches (in __soft_restart)

3. Disable the MMU (in __soft_restart)


Meanwhile, to keep the caches coherent across this transition, flushing the caches is necessary.


NOTE: the atags, the elfcorehdr, and the capture kernel image are loaded by sys_kexec_load.
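As a side note (this command is not from the original post; the image path and kernel command line are placeholders), that loading is normally done from user space with kexec-tools, for example:

$ kexec -p /boot/zImage-capture --append="console=ttyAMA0 maxcpus=1 reset_devices"

The syscall behind it, sys_kexec_load, places the capture kernel image, the atags, and the elfcorehdr into the reserved memory region and fills kexec_crash_image as described above.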









April 26th, 2013 in Uncategorized | No comments yet






 
