Paradigms and Techniques in Parallel Programming

Lecture 1
Paradigme si tehnici de programare paralela 1
Course Structure

Theoretical
• Introductory notions: concurrency, parallelism
• Stages in the development of parallel programs
• Functional decomposition versus data-domain decomposition
• Techniques used in developing parallel algorithms
  – the binary tree technique
  – the recursive doubling technique
• The PRAM model – measuring the performance of parallel programs
• Paradigms:
  – Master-slaves
  – Task-Farm
  – Work-Pool
  – Divide & Impera

Practical
• Java threads (low-level API)
• High-level API: the java.util.concurrent packages
  – Lock Objects
  – Executors
  – Executor Interfaces
  – Thread Pools
  – Fork/Join
  – Concurrent Collections
  – Atomic Variables
• OpenMP
• Threading Building Blocks (TBB)
• MPI – Message Passing Interface
  – examples in C, C++
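As a first taste of the java.util.concurrent packages listed above, here is a minimal sketch (class name and task values are illustrative): a fixed-size thread pool executes two independent tasks and collects their results through Future objects.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A minimal sketch of the high-level java.util.concurrent API:
// a fixed thread pool runs independent tasks, possibly in parallel.
public class ThreadPoolSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Future<Integer> a = pool.submit(() -> 1 + 2);   // independent task 1
        Future<Integer> b = pool.submit(() -> 3 * 4);   // independent task 2
        // get() blocks until each task's result is available
        System.out.println(a.get() + b.get());          // prints 15
        pool.shutdown();
    }
}
```

The Executors factory and Future-based result collection are the same building blocks used later by Thread Pools and Fork/Join.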
Bibliography

• Java tutorials: http://docs.oracle.com/javase/tutorial/essential/concurrency/further.html
• Threading Building Blocks: https://www.threadingbuildingblocks.org/
• OpenMP: http://openmp.org/
• MPI: http://www.mpi-forum.org/
• Ian Foster. Designing and Building Parallel Programs, Addison-Wesley, 1995.
• A. Grama, A. Gupta, G. Karypis, V. Kumar. Introduction to Parallel Computing, Addison-Wesley, 2003.
• D. Grigoras. Calculul Paralel. De la sisteme la programarea aplicatiilor. Computer Libris Agora, 2000.
• V. Niculescu. Calcul Paralel. Proiectare si dezvoltare formala a programelor paralele. Presa Univ. Clujeana, 2006.
• B. Wilkinson, M. Allen. Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Prentice Hall, 2002.
Evaluation

• Lab work (40%)
  – simple lab exercises
  – exercises -> parallel programming patterns
  – programs using Java threads, Threading Building Blocks, OpenMP
  – graduate students will also do assignments with MPI
• Project with report (50%)
  – program, presentation, documentation
• Activity during the semester (10%)
  – attendance, active participation, activity during the lab
Report and Project

• Presentation of a parallel programming paradigm or technique
• A non-trivial parallel application that uses that paradigm/technique
• Includes a performance analysis
• Includes the associated documentation
• Dates
  – choosing the application: deadline – week 4
  – project presentation during the last 2 weeks of the semester
Overview

• Parallel architectures
• Parallel programming
• Parallel algorithms
• Parallel performance models and tools
• Parallel applications
Parallel Processing

• A parallel computer is a computer (system) that uses multiple processing elements simultaneously, in a cooperative manner, to solve a computational problem.
• Parallel processing includes the techniques and technologies that make parallel computation possible:
  – hardware, networks, operating systems, libraries, languages, compilers, algorithms, ...
• Parallelism is natural.
• PERFORMANCE
  – Parallelism is very much about performance!
Serial vs. Parallel Computation
Limits of Serial Programming

• Transmission speed:
  – the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond)
• Limits of miniaturization – the number of transistors per chip
• Economic limitations
• Current architectures rely increasingly on hardware-level parallelism to improve performance:
  – multiple execution units
  – pipelined instructions
  – multi-core
Concurrency

• Consider several tasks that must be executed on a computer.
• The tasks are considered concurrent if:
  – they can execute at the same time (concurrent execution)
  – it follows that there are no dependencies between them
• Dependencies:
  – a task needs the results of other tasks => dependent execution
  – if two tasks are dependent, then they are not concurrent
  – forms of synchronization must be used to satisfy the dependencies
• Concurrency is fundamental in computer science
  – operating systems, databases, networking, ...
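A minimal Java sketch of these notions (class name and task values are illustrative): two independent tasks are concurrent, while a computation that depends on both of their results must use a form of synchronization, here Thread.join(), before it can proceed.

```java
// Two independent tasks run concurrently; a third computation depends
// on their results, so it must wait (synchronize) before it can start.
public class DependencySketch {
    static int sum, product;  // results of the two independent tasks

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> sum = 2 + 3);      // independent task
        Thread t2 = new Thread(() -> product = 4 * 5);  // independent task
        t1.start();
        t2.start();
        // join() is the synchronization that satisfies the dependency:
        // the combining step needs both results.
        t1.join();
        t2.join();
        System.out.println(sum + product);  // prints 25
    }
}
```

Without the two join() calls, the dependent computation could read sum and product before t1 and t2 have finished, violating the dependency.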
Concurrency and Parallelism

• Concurrent != parallel!
• Parallel execution:
  – the concurrent tasks actually execute at the same time
  – multiple computing resources must exist
• Parallelism = concurrency + "parallel" hardware
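Whether the "parallel hardware" part of this equation is present can be checked at run time; a minimal Java sketch (class name is illustrative):

```java
// Concurrency is expressed in the program; parallelism additionally
// requires multiple hardware execution resources. The JVM reports
// how many processors are available to it.
public class CoresSketch {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available processors: " + cores);
        // With cores > 1, concurrent threads can truly run in parallel;
        // with cores == 1, they are merely interleaved (time-sliced).
        System.out.println(cores > 1 ? "parallel execution possible"
                                     : "interleaving only");
    }
}
```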
Parallelism

• There are several levels of parallelism:
  – processes, threads, routines, instructions, ...
• They must be supported by the hardware resources:
  – processors, cores, ... (instruction execution)
  – memories, DMA, networks, ... (associated operations)
• Concurrency is a necessary condition for parallelism
Why Use Parallel Programming?

• Primary reasons:
  – faster computation (response time)
  – solving 'large' computational problems (in reasonable computing time)
• Secondary reasons:
  – effective use of computing resources
  – reduced costs
  – reducing memory-related constraints
  – the limitations of serial machines
• Parallelism = concurrency + parallel HW + performance
• Solving difficult, large problems:
  – "Grand Challenge" problems (en.wikipedia.org/wiki/Grand_Challenge) requiring PetaFLOPS and PetaBytes of computing resources
  – web search engines/databases processing millions of transactions per second
• Using non-local resources:
  – SETI@home (setiathome.berkeley.edu) uses over 330,000 computers for a compute power of over 528 TeraFLOPS (as of August 04, 2008)
  – Folding@home (folding.stanford.edu) uses over 340,000 computers for a compute power of 4.2 PetaFLOPS (as of November 4, 2008)
Directions in Parallel Processing

• Parallel architectures
  – What hardware is needed for parallel execution?
  – computer system design
• (Parallel) operating systems
  – managing the system-level aspects of a parallel computer
• Parallel programming
  – libraries (low-level, high-level)
  – languages
  – development environments
  – software
• Parallel algorithms
• Performance evaluation of parallel programs
• Parallel tools:
  – performance, analysis, visualization, ...
Why Study Parallel Programming?

• Computer architectures
  – innovations lead to new programming models
• Technological convergence
  – the "killer micro" is everywhere
  – laptops and supercomputers are fundamentally similar
  – current trends lead to the convergence of diverse approaches
• Technological trends make parallel computing inevitable
  – multi-core processors!
  – today, every computing system is parallel
• Understanding the fundamental principles
  – programming, communication, memory, ...
  – performance
• Parallelism is the future of computing
The Inevitability of Parallel Processing

• Application demands
  – the enormous need for computing cycles
• Technology trends
  – processing and memory
• Architectural trends
• Economic factors
• Current trends:
  – today's microprocessors have multiprocessor support
  – servers and workstations are available as multiprocessors
  – tomorrow's microprocessors are multiprocessors
  – multi-core is here to stay, and #cores/processor is growing
  – accelerators (GPUs, gaming systems)
High-Performance Computing Applications

• Application performance demands high-performance hardware (fast, multiple resources, etc.)
• Advanced hardware generates new applications
• New applications have ever higher performance demands
  – exponential growth of microprocessor performance
  – innovations in parallel architectures and in integration
• Performance demands =>
  – system performance must improve overall
  – changes/approaches/re-evaluations in software engineering
  – costs – advanced technology

[Figure: the cycle applications -> performance -> hardware]
PARALLEL AND DISTRIBUTED COMPUTATION

• Many interconnected processors working concurrently

[Figure: processors P1, P2, P3, ..., Pn connected through an interconnection network]

• Connection Machine
• Internet: connects all the computers of the world
TYPES OF MULTIPROCESSING: PARALLEL vs. DISTRIBUTED

Technical aspects
• Parallel computers (usually) work with tight coupling, synchrony, and (to a large extent) shared memory, with a very fast and reliable communication system.
• Distributed computers are more independent; communication is less frequent and less fast (asynchronous) – limited cooperation.

Goals
• Parallel computers cooperate in order to solve difficult problems more efficiently.
• Distributed computers have individual goals and private activities; only occasionally is intercommunication necessary.
• Parallel computers: cooperation in a positive sense.
• Distributed computers: cooperation in a negative sense, only when necessary.
PARALLEL SYSTEMS
We are interested in solving problems in parallel.

DISTRIBUTED SYSTEMS
We are interested in solving certain specific problems:
• communication services: routing, broadcasting
• maintenance of the control structure: topology update, leader election
• resource control activities: load balancing, managing global directories
Cluster Model [Munehiro Fukuda]

• Client
  – takes a client-server model
• Server
  – consists of many PCs/workstations connected to a high-speed network
  – puts more focus on performance: serves requests in parallel

[Figure: client workstations on a 100Gbps LAN send requests to a cluster; a master node distributes work over a 1Gbps SAN to slave nodes 1..N, e.g. http servers 1..N]
Grid Computing [Munehiro Fukuda]

• Goal
  – collect the computing power of supercomputers and clusters sparsely located over the nation and make it available as if it were the electric grid
• Distributed supercomputing
  – very large problems needing lots of CPU, memory, etc.
• High-throughput computing
  – harnessing many idle resources
• On-demand computing
  – remote resources integrated with local computation
• Data-intensive computing
  – using distributed data
• Collaborative computing
  – support communication among multiple parties

[Figure: supercomputers, clusters, minicomputers, and workstations connected by a high-speed information highway]