lec03

Published on December 28, 2007

Author: AscotEdu

Source: authorstream.com

3. Instruction-Level Parallelism and Its Dynamic Exploitation

Concepts and Challenges

- ILP (instruction-level parallelism): overlap among instructions.
- Dynamic approach: the overlap is exploited by a hardware implementation.
- Basic block: a straight-line code sequence with no branches in except at the entry and no branches out except at the exit.
- Loop-level parallelism: exploited by unrolling the loop or by using vector instructions.

Major techniques covered:
- Forwarding and bypassing
- Delayed branches and simple branch scheduling
- Basic dynamic scheduling (scoreboarding)
- Dynamic scheduling with renaming
- Dynamic branch prediction
- Issuing multiple instructions per cycle
- Speculation
- Dynamic memory disambiguation
- Loop unrolling
- Basic compiler pipeline scheduling
- Compiler dependence analysis
- Software pipelining, trace scheduling
- Compiler speculation

MIPS Pipelining

[Pipeline diagrams: IF, ID, EX, MEM, WB stages with the pipeline registers NPC, IR, A, B, Imm, and LMD]

Overcoming Data Hazards with Dynamic Scheduling

Imprecise exceptions:
- The exception raised does not look exactly as if the instructions had been executed sequentially in strict program order:
  - the pipeline may have already completed instructions that are later in program order;
  - the pipeline may not yet have completed some instructions that are earlier in program order.
- This makes it difficult to restart execution.

Changes in the pipeline:
- IF stage: pending instructions are fetched into a queue and issued from the queue (instruction prefetch).
- ID stage, split into two:
  - Issue: decode instructions, check for structural hazards.
  - Read operands: wait until no data hazards, then read operands.
- Instructions wait in a buffer until their operands are ready.
- EX stage: multiple functional units allow multiple instructions to be in execution at the same time.

Dynamic Scheduling Using Tomasulo's Approach

- Register renaming by reservation stations minimizes WAW and WAR hazards.
- Hazard detection and execution control are distributed, by using reservation stations.
- Reservation stations buffer the instructions and operands, fetching each operand as soon as it becomes available.
- CDB bypassing: results are passed over the common data bus (CDB) directly to the functional units that need them.
- Load buffers and store buffers behave almost exactly like reservation stations, but require a two-step execution process: address calculation, then memory access.

Reservation station fields (r = reservation station number):
- Busy: whether the station is occupied. RS[r].Busy
- Op: the operation to perform on the operands. RS[r].Op
- Qj, Qk: the reservation stations that will produce the corresponding source operands. RS[r].Qj, RS[r].Qk
- Vj, Vk: the values of the source operands. RS[r].Vj, RS[r].Vk

Register file fields:
- Qi: the number of the reservation station that will write this register; if the register is not busy, Qi = 0. RegisterState[].Qi
- Register value: Regs[]

Tomasulo's algorithm, step 1 — Issue:
- Get the next instruction from the head of the instruction queue.
- If there is a matching reservation station that is empty, issue the instruction to that station; otherwise there is a structural hazard and the instruction stalls.
- Send the operand values if they are currently in the registers, or enter the numbers of the reservation stations that will produce the operands into the Qj and Qk fields.
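The issue step just described can be sketched in Python. This is a minimal, hypothetical model — the `issue` function, the dictionary-based reservation stations, and the register-status table are illustrative names, not the textbook's exact notation:

```python
# Minimal sketch of Tomasulo's issue step (simplified, assumed model).
# RS: reservation stations; RegStat: per-register producer tags (0 = none).

def make_rs(n):
    return [dict(busy=False, op=None, Qj=0, Qk=0, Vj=None, Vk=None)
            for _ in range(n)]

def issue(instr, RS, RegStat, Regs):
    """instr = (op, dest, src1, src2). Returns station index, or None (stall)."""
    for r, st in enumerate(RS):
        if not st["busy"]:
            st.update(busy=True, op=instr[0])
            for src, qf, vf in ((instr[2], "Qj", "Vj"), (instr[3], "Qk", "Vk")):
                if RegStat.get(src, 0):          # operand still being produced:
                    st[qf] = RegStat[src]        # record the producing station
                else:                            # operand available in registers
                    st[qf], st[vf] = 0, Regs[src]
            RegStat[instr[1]] = r + 1            # dest will come from station r
            return r
    return None                                  # structural hazard: stall

RS = make_rs(2)
Regs = {"F0": 1.0, "F2": 2.0}
RegStat = {}
issue(("ADD.D", "F4", "F0", "F2"), RS, RegStat, Regs)
```

A second instruction that reads F4 would find RegStat["F4"] set and record the producing station in its Qj field instead of a value, which is exactly the renaming effect.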
Step 2 — Execute:
- When all the operands are available, execute; otherwise there is a data hazard and the instruction waits.

Step 3 — Write result:
- Write the result on the CDB, and from there into the registers and into any reservation stations waiting for it.

Dynamic Scheduling: Examples and the Algorithm

Tomasulo's algorithm: the details (worked examples).

Drawbacks of dynamic scheduling:
- Complexity: the Tomasulo scheme requires a large amount of hardware; each reservation station must contain an associative buffer that runs at high speed, as well as complex control logic.
- Performance can be limited by the single CDB: the CDB must interact with every reservation station, and the associative tag-matching hardware would have to be duplicated at each station for each additional CDB.

Reducing Branch Costs with Dynamic Hardware Prediction

- Dynamic branch prediction attacks the potential stalls arising from control dependences.

Basic branch prediction:
- A branch-prediction buffer (branch history table) is a small memory indexed by the lower portion of the address of the branch instruction.
- Each entry contains a bit that says whether the branch was recently taken or not (1-bit prediction); the bit may have been put there by another branch that has the same low-order address bits.
- Performance shortcoming: even if a branch is almost always taken, we will likely mispredict twice, rather than once, each time it is not taken.

Example: consider a loop branch whose behavior is taken nine times in a row, then not taken once.
What is the prediction accuracy for this branch, assuming the prediction bit for it remains in the prediction buffer?
- The steady-state behavior mispredicts on the first and last loop iterations, so the accuracy is 80% (8 of 10).
- In general, a 1-bit predictor mispredicts at twice the rate at which the branch is not taken.

[1-bit predictor state diagram: states 1 (predict taken) and 0 (predict not taken), with taken/not-taken transitions]

2-bit prediction:
- Studies of n-bit predictors have shown that 2-bit predictors do almost as well as larger counters.

[State diagrams for the standard 2-bit predictor and the 2-bit saturating counter: states 11, 10, 01, 00, with taken/not-taken transitions]

Example — local prediction:

for (i = 1; i <= 4; i++) { }

gives the branch history pattern 1110 1110 1110 1110 ...

Example — global prediction:

if (aa == 2) aa = 0;
if (bb == 2) bb = 0;
if (aa != bb) { ... }

is translated as:

      DSUBUI R3,R1,#2
      BNEZ   R3,L1      ; branch b1 (aa != 2)
      DADD   R1,R0,R0   ; aa = 0
L1:   DSUBUI R3,R2,#2
      BNEZ   R3,L2      ; branch b2 (bb != 2)
      DADD   R2,R0,R0   ; bb = 0
L2:   DSUBU  R3,R1,R2   ; R3 = aa - bb
      BEQZ   R3,L3      ; branch b3

Correlating branch predictors:
- Also look at the recent behavior of other branches (global prediction) and at branch behavior under different history patterns (local prediction): two-level prediction.
- A (1,1) predictor uses the behavior of the last branch to choose between a pair of 1-bit branch predictors.
- An (m,n) predictor uses the behavior of the last m branches to choose from among 2^m branch predictors, each of which is an n-bit predictor for a single branch.
- The branch-prediction buffer is indexed by concatenating the low-order bits of the branch address with the m-bit global history.
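The 2-bit saturating counter and the loop-branch example above can be simulated directly. A minimal sketch (assuming the counter starts in the strongly-taken state; `predict_2bit` is an illustrative helper name):

```python
# 2-bit saturating counter branch predictor (minimal sketch).
# Counter values 0..3; predict taken when counter >= 2.

def predict_2bit(outcomes, counter=3):
    """Simulate the counter over a list of outcomes; return # correct."""
    correct = 0
    for taken in outcomes:
        if (counter >= 2) == taken:
            correct += 1
        # saturating update: increment on taken, decrement on not taken
        counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    return correct

# Loop branch from the example: taken 9 times, then not taken once.
pattern = [True] * 9 + [False]
print(predict_2bit(pattern * 3) / 30)  # steady-state accuracy: 0.9
```

Unlike the 1-bit predictor, which mispredicts twice per loop exit, the 2-bit counter only mispredicts once (on the final not-taken outcome), giving 90% rather than 80% accuracy on this pattern.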
Two-level prediction example:
- A (2,2) predictor with 64 total entries: the 4 low-order address bits of the branch and the 2 global history bits form a 6-bit index into the 64 counters.
- Global branch history: the history of all branch instructions.
- Local branch history: the history of each individual branch instruction.

Tournament predictors: adaptively combining local and global predictors
- A plain 2-bit predictor uses only local information.
- Tournament predictors use multiple predictors — usually one based on global information and one based on local information — combined with a selector.
- The most popular form of multilevel branch predictor uses a 2-bit saturating counter per branch to choose between two different predictors; the four states of the counter dictate whether to use predictor 1 or predictor 2.
- The selector counter is incremented whenever the currently "predicted" predictor is correct and the other predictor is incorrect, giving the ability to select the right predictor for the right branch.
- The global predictor is indexed only by history; the local predictor is indexed by address and history.

An example: the Alpha 21264 branch predictor
- 4K 2-bit counters, indexed by the local branch address, choose between a global predictor and a local predictor.
- The global predictor has 4K entries, indexed by the history of the last 12 branches; each entry is a standard 2-bit predictor.
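The 6-bit index formation in the (2,2) example above can be sketched in a few lines (the helper name `index_2_2` and the sample address are illustrative assumptions):

```python
# Sketch of (2,2)-predictor indexing with 64 counters, following the
# example above: 4 low-order branch-address bits + 2 global history bits.

def index_2_2(branch_addr, global_history):
    """Form the 6-bit index: low 4 address bits concatenated with 2 history bits."""
    return ((branch_addr & 0xF) << 2) | (global_history & 0x3)

counters = [0] * 64                 # one 2-bit counter per (address, history) pair
i = index_2_2(0b1010110, 0b01)      # address bits 0110, history 01 -> 011001
print(i)  # 25
```

Note that two branches whose addresses share the same low 4 bits map to the same counters — the aliasing effect mentioned for the simple branch-history table applies here as well.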
- The local predictor is itself two-level: a top-level local history table of 1024 10-bit entries records the most recent ten outcomes of each branch, and the 10-bit history then indexes 1K local prediction entries consisting of 3-bit saturating counters.

High Performance Instruction Delivery

Branch target buffers:
- A branch target buffer (BTB) stores the predicted PC after a branch or jump.
- If a matching entry is found in the BTB, fetching begins immediately at the predicted PC.
- Target instruction buffers instead store one or more target instructions:
  - faster, and allows a larger BTB;
  - allows branch folding — to obtain 0-cycle unconditional branches, the instruction from the branch target buffer is substituted for the instruction returned from the cache.

Integrated instruction fetch units:
- A separate autonomous unit that feeds instructions to the rest of the pipeline.
- Integrates several functions: integrated branch prediction, instruction prefetch, and instruction buffering.

Return address predictors:
- A technique for predicting indirect jumps, whose destination addresses vary at run time.
- A small buffer of return addresses operating as a stack caches the most recent return addresses.

Exercise: 3.14 (IIFU, Tomasulo)

Taking Advantage of More ILP with Multiple Issue

- Multiple-issue processors allow multiple instructions to issue in a clock cycle.
- Superscalar processors issue varying numbers of instructions per clock and may be statically or dynamically scheduled; most leading-edge desktop and server processors use dynamic scheduling.
- VLIW (very long instruction word) processors issue a fixed number of instructions, formatted either as one large instruction or as a fixed instruction packet; the latter style is also known as EPIC (Explicitly Parallel Instruction Computing). VLIWs are statically scheduled: the compiler has complete responsibility for creating a package of instructions that can be issued simultaneously.

Statically scheduled superscalar processors:
- Dynamic issue capability vs. static issue capability.
- Problems in dynamic issue: instructions issue in order and all pipeline hazards are checked at issue time, with 0 to n instructions from the issue packet actually issuing in a given clock cycle.
- The issue checks are sufficiently complex that either the issue logic determines the minimum clock cycle length, or the issue stage is split and pipelined — which tends to raise branch penalties, further increasing the importance of branch prediction.

A statically scheduled superscalar MIPS processor:
- Assume 2 instructions can be issued per clock cycle: one can be a load, store, branch, or integer ALU operation, and the other can be any floating-point operation.
- This requires fetching and decoding 64 bits of instructions per cycle.
- Three steps are involved in fetch and issue: fetch two instructions from the cache; determine whether zero, one, or two instructions can issue; issue them to the correct functional units.

Multiple instruction issue with dynamic scheduling:
- Extend Tomasulo's algorithm: either run each step in half a clock cycle so that 2 instructions can be processed in one clock cycle,
or build the logic necessary to handle 2 instructions at once (wide issue logic).
- Factors limiting performance: the imbalance between the pipeline's functional unit structure and the program, the amount of overhead per loop iteration, and the control hazard in the loop.

[Diagram — baseline superscalar model: an instruction buffer feeding wide issue logic, which dispatches to multiple functional units]

Example:

Loop: L.D    F0,0(R1)    ; F0 = array element
      ADD.D  F4,F0,F2    ; add scalar in F2
      S.D    F4,0(R1)    ; store result
      DADDIU R1,R1,#-8   ; decrement pointer, 8 bytes (per DW)
      BNE    R1,R2,Loop  ; branch if R1 != R2

Assume a floating-point and an integer operation can be issued every clock cycle, 2 cycles for load, 3 cycles for FP add, and 2 CDBs.
- Result: 1 iteration every 3 cycles for issue, but 5 cycles for completion; a resource conflict and the control dependence limit performance.

The same example again, now also assuming separate integer functional units for effective address calculation and for ALU operations
(with 2 cycles for load, 3 cycles for FP add, and 2 CDBs as before):
- Result: 1 iteration every 3 cycles for issue and 3 cycles for completion; the control dependence remains, and the 2nd CDB is used.

Hardware-Based Speculation

Problem:
- A wide-issue processor may need to execute a branch every clock cycle to maintain maximum performance, so just predicting branches accurately may not be sufficient to generate the desired amount of ILP.

Solution:
- Speculate on the outcome of branches and execute the program as if the guesses were correct: fetch, issue, and execute instructions as if branch predictions were always correct, and provide mechanisms to handle the situation where the speculation is incorrect.

Three key ideas:
- Dynamic branch prediction, to choose which instructions to execute.
- Speculation, to allow the execution of instructions before the control dependences are resolved — with the ability to undo the effects of an incorrectly speculated sequence.
- Dynamic scheduling, to deal with the scheduling of different combinations of basic blocks.
Tomasulo's algorithm can be extended to support speculation:
- Separate the bypassing of results among instructions from the actual completion of an instruction.
- Allow an instruction to execute and bypass its results to other instructions without allowing it to perform any updates that cannot be undone; instructions using speculated results themselves become speculative.
- When an instruction is no longer speculative, allow it to update the register file or memory: instruction commit.

The key idea:
- Allow instructions to execute out of order but force them to commit in order; prevent any irrevocable action, such as updating state or taking an exception, until the instruction commits.
- Separate the process of completing execution from instruction commit using an additional set of hardware buffers that hold the results of instructions before they are committed: the reorder buffer (ROB).
- The ROB is also a source of operands: it supplies operands in the interval between completion of instruction execution and instruction commit.

ROB organization:
- A circular buffer; entries are allocated and deallocated by two revolving pointers.
- Entries are allocated to instructions strictly in program order, and each entry keeps track of the execution status of its instruction.

ROB fields:
- Instruction type: branch (no destination result), store (memory address destination), or register operation (register destination). ROB[].Instruction
- Destination: the register number or the memory address. ROB[].Dest
- Value: the instruction result. ROB[].Value
- Ready: whether the value is ready. ROB[].Ready
- Address: for load/store operations (the ROB replaces the store buffer). ROB[].A

Register fields:
- Busy: RegisterState[].Busy
- Reorder (the instruction's ROB entry / sequence number): RegisterState[].Reorder
- Qi: RegisterState[].Qi
- Value: RegisterState[].Value

Four steps: issue, execute, write result, commit.
- Issue: if there is an empty reservation station and an empty slot in the ROB, mark both as busy and send the operands to the reservation station if they are available in the registers or the ROB.
- Execute.
- Write result: write the result on the CDB, and from the CDB into the ROB; mark the reservation station as empty.
- Commit: when an instruction reaches the head of the ROB, commit it and mark its ROB entry as empty. If the instruction is a branch with an incorrect prediction, the speculation was wrong: the ROB is flushed and execution restarts at the correct successor of the branch.

Advantages of speculation — precise interrupts:
- A processor with a ROB can dynamically execute code while maintaining a precise interrupt model, by flushing any pending instructions in the ROB.

Advantages of speculation — easy recovery from branch misprediction:
- The processor can easily undo its speculative actions when a branch is found to be mispredicted,
clearing the ROB of all entries that appear after the mispredicted branch and allowing entries before the branch to continue. Performance thereby becomes more sensitive to the branch prediction mechanism.

Exception processing:
- If a speculated instruction raises an exception, the exception is recorded in the ROB and not recognized until the instruction is ready to commit; if the ROB is cleared first, the exception is flushed along with the instruction. When an instruction reaches the head of the ROB it is no longer speculative, so its exception can safely be taken.
- Control complexity: speculation adds significant complications to the control logic (Figure 3.32).

Load and store hazards:
- A store updates memory only when it reaches the head of the ROB, so the actual updating of memory occurs in order: WAW and WAR hazards through memory are eliminated.
- RAW hazards through memory are maintained by not allowing a load to initiate the second step of its execution while any earlier store in the ROB has a Destination field matching the load's address, e.g.:

      store r1, 100(r2)
      load  r3, 100(r2)

Multiple issue with speculation:
- Process multiple instructions per clock cycle and commit multiple instructions per clock cycle.
- Challenges: instruction issue, and monitoring the CDBs for instruction completion.

Design considerations for speculative machines:
- Register renaming versus reorder buffers: with speculation, register values may temporarily reside in the ROB; in the register-renaming approach, an extended set of registers is used to hold those values instead.
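The in-order commit and flush-on-misprediction behavior of the ROB described above can be sketched with a small model (the `ROB` class and its fields are hypothetical simplifications; real entries also carry type, destination, value, and address):

```python
# Minimal reorder-buffer sketch: entries commit strictly in program order;
# a mispredicted branch reaching the head flushes all younger entries.
from collections import deque

class ROB:
    def __init__(self):
        self.entries = deque()           # head = oldest instruction

    def allocate(self, name):
        e = dict(name=name, ready=False, mispredicted=False)
        self.entries.append(e)           # strictly in program order
        return e

    def commit(self):
        """Commit ready instructions from the head; flush on misprediction."""
        committed = []
        while self.entries and self.entries[0]["ready"]:
            e = self.entries.popleft()
            if e["mispredicted"]:
                self.entries.clear()     # squash everything younger
                break
            committed.append(e["name"])
        return committed

rob = ROB()
a, b, c = rob.allocate("lw"), rob.allocate("bne"), rob.allocate("add")
c["ready"] = True                        # 'add' finishes first ...
print(rob.commit())                      # ... but nothing commits: []
a["ready"] = b["ready"] = True
b["mispredicted"] = True
print(rob.commit())                      # 'lw' commits, 'add' is squashed: ['lw']
```

The first `commit()` call shows why out-of-order completion stays safe: a finished instruction cannot update state while an older, unfinished one is ahead of it in the buffer.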
- How much to speculate: the cost of speculation shows up on exceptional events, such as cache misses and TLB misses, encountered while speculating.
- Speculating through multiple branches complicates the process of speculation recovery.

Example (on a 2-issue processor):

Loop: LW     R2,0(R1)    ; R2 = array element
      DADDIU R2,R2,#1    ; increment R2
      SW     0(R1),R2    ; store result
      DADDIU R1,R1,#4    ; increment pointer
      BNE    R2,R3,Loop  ; branch if not last element

Assume separate integer functional units for effective address calculation, ALU operations, and branch condition evaluation, and that up to two instructions of any type can commit per clock.
- Without speculation, the control dependence is the main performance limitation.
- With speculation, the iterations overlap.

Exercises: 3.18, 3.19

Studies of the Limitations of ILP

The hardware model — an ideal processor in which all artificial constraints on ILP are removed:
- Register renaming: an infinite number of virtual registers is available beyond the architecturally visible registers, so all WAW and WAR hazards are avoided and an unbounded number of instructions can begin execution simultaneously.
- Branch prediction: perfect — all conditional branches are predicted exactly.
- Jump prediction: all jumps are perfectly predicted, including jump register used for returns and computed jumps; together with perfect branch prediction, this is equivalent to an unbounded buffer of instructions available for execution.
- Memory-address alias analysis: all memory addresses are known exactly, so a load can be moved before a store if the addresses are not identical.
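The perfect-alias rule just stated — a load may be moved before a store only if their addresses differ — can be sketched as a check of the load's effective address against the addresses of earlier, still-pending stores (the helper name is an illustrative assumption):

```python
# Perfect alias analysis (sketch): a load may be hoisted above earlier
# stores only when its effective address matches none of theirs.

def can_hoist_load(load_addr, pending_store_addrs):
    return all(load_addr != a for a in pending_store_addrs)

# store r1, 100(r2) followed by load r3, 100(r2): identical effective
# addresses, so the load cannot be moved above the store.
r2 = 0x1000
print(can_hoist_load(r2 + 100, [r2 + 100]))  # False
print(can_hoist_load(r2 + 104, [r2 + 100]))  # True
```

A real processor cannot evaluate this test perfectly at schedule time, since the addresses are produced at run time — which is exactly why the ideal model treats alias analysis as an upper bound.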
The aim of the study is to find out how much room there is to improve ILP. For example, with perfect alias analysis, the load below need not wait for the store unless the addresses match:

      store r1, 100(r2)
      load  r3, 100(r2)

The hardware model (continued):
- The processor can issue an unlimited number of instructions at once.
- All functional unit latencies are assumed to be one cycle.
- Caches are perfect: all loads and stores complete in one cycle (100% hit rate).
- ILP is therefore limited only by the true data dependences, and the ILP available in such a perfect processor is the average amount of parallelism available in the program.

What the perfect processor must do:
- Look arbitrarily far ahead to find a set of instructions to issue, predicting all branches perfectly.
- Rename all register uses to avoid WAR and WAW hazards.
- Determine the data dependences among the instructions, renaming accordingly.
- Determine the memory dependences and handle them appropriately.
- Provide enough replicated functional units to allow all ready instructions to issue.

Determining data dependences:
- How many comparisons are needed for 3-instruction issue? For RAW checks alone: 2x2 + 2x1 = 6.
- For n-instruction issue: 2(n-1) + 2(n-2) + ... + 2x1 = n^2 - n comparisons, e.g. 2450 for n = 50 — and all the comparisons must be made at the same time.

Limitations on the window size and maximum issue count:
- The instruction window is the set of instructions examined for simultaneous execution; it limits the number of instructions that begin execution in a given cycle.
- The window size is limited by the required storage, the comparisons, and a limited issue rate; practical windows are in the range of 32 to 126 instructions.
- Real processors are further limited by the number of functional units, the number of buses, and register access ports; large window sizes are impractical and inefficient.

[Figure: the effects of reducing the size of the window]
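The comparison-count formula above can be checked in a few lines (a sketch; `raw_comparisons` is an illustrative helper name):

```python
# RAW comparison count for n-wide issue: each of the two source operands of
# instruction i must be checked against the destinations of all earlier
# instructions in the issue group.

def raw_comparisons(n):
    return sum(2 * k for k in range(1, n))   # 2(n-1) + ... + 2*1 = n^2 - n

print(raw_comparisons(3))    # 6
print(raw_comparisons(50))   # 2450
```

The quadratic growth is the point: widening issue from 3 to 50 instructions multiplies the simultaneous dependence checks from 6 to 2450, which is why very wide windows are impractical.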
[Figures: the effects of realistic branch and jump prediction (a tournament predictor), of finite registers, and of imperfect alias analysis]

Limitations on ILP for Realizable Processors

A realizable processor:
- Up to 64-instruction issue per clock (note the logic complexity).
- A tournament predictor with 1K entries and a 16-entry return predictor; the predictor is not a primary bottleneck.
- Perfect disambiguation of memory references, done dynamically through a memory dependence predictor.
- Register renaming with 64 additional integer and 64 additional FP registers.

Limitations relative to the perfect processor:
- WAR and WAW hazards through memory: these arise from the allocation of stack frames — a called procedure reuses the memory locations of a previous procedure on the stack.
- Unnecessary dependences: a loop such as for (i = 0; i < M; i++) { } contains at least one dependence (on the induction variable) that cannot be eliminated dynamically.
- Overcoming the data flow limit: value prediction — predicting data values and speculating on the prediction.

Proposals for improving the realizable processor model:
- Address value prediction and speculation: predict memory address values (for example, the addresses of A[i] in for (i = 0; i < M; i++) { A[i] = ... }) and speculate by reordering loads and stores; much of this can be accomplished by simpler techniques.
- Speculating on multiple paths: reduces the cost of incorrect recovery, but is feasible only for a limited number of branches.

Putting It All Together: The P6 Microarchitecture

Putting It All Together: The NetBurst Microarchitecture

The Intel NetBurst microarchitecture provides:
- The rapid execution engine: ALUs run at twice the processor frequency, so basic integer operations execute in half a clock tick.
- Hyper-pipelined technology: a 20-stage pipeline.
- Advanced dynamic execution: a deep, out-of-order, speculative execution engine, with up to 126 instructions in flight and up to 48 loads and 24 stores in the pipeline.
- Enhanced branch prediction capability: a 4K-entry branch target array.

NetBurst-era processors:
- Intel Pentium 4: Streaming SIMD Extensions 2 (SSE2) support, Hyper-Threading technology.
- Intel Xeon: SSE2 support, Hyper-Threading technology; for use in servers and high-performance workstations.
- Intel Pentium M: SSE2 support; a high-performance, low-power core.
- Intel Pentium D: SSE2 support, dual-core, SpeedStep technology.

Another View: Thread-Level Parallelism

- Simultaneous multithreading: multiple threads share the issue slots of a single processor within the same cycle.

[Figure: issue slots across cycles under simultaneous multithreading]
