Book Description
This book constitutes revised selected papers from the workshops held at the 29th International Conference on Parallel and Distributed Computing, Euro-Par 2023, which took place in Limassol, Cyprus, during August 28–September 1, 2023. The 42 full papers presented in this book, together with 11 symposium papers and 14 demo/poster papers, were carefully reviewed and selected from 55 submissions. The papers cover all aspects of parallel and distributed processing, ranging from theory to practice, from the smallest to the largest parallel and distributed systems and infrastructures, from fundamental computational problems to applications, and from architecture, compiler, language, and interface design and implementation to tools, support infrastructures, and application performance.
LNCS 14351:
- First International Workshop on Scalable Compute Continuum (WSCC 2023)
- First International Workshop on Tools for Data Locality, Power and Performance (TDLPP 2023)
- First International Workshop on Urgent Analytics for Distributed Computing (QuickPar 2023)
- 21st International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms (HETEROPAR 2023)
LNCS 14352:
- Second International Workshop on Resource AWareness of Systems and Society (RAW 2023)
- Third International Workshop on Asynchronous Many-Task Systems for Exascale (AMTE 2023)
- Third International Workshop on Performance and Energy-efficiency in Concurrent and Distributed Systems (PECS 2023)
- First Minisymposium on Applications and Benefits of UPMEM Commercial Massively Parallel Processing-In-Memory Platform (ABUMPIMP 2023)
- First Minisymposium on Adaptive High Performance Input/Output Systems (ADAPIO 2023)
- The 2nd International Workshop on Resource AWareness of Systems and Society (RAW 2023)
- Performance and Energy Aware Training of a Deep Neural Network in a Multi-GPU Environment with Power Capping
- GPPRMon: GPU Runtime Memory Performance and Power Monitoring Tool
- Towards Resource-Efficient DNN Deployment for Traffic Object Recognition: From Edge to Fog
- The Implementation of Battery Charging Strategy for IoT Nodes
- subMFL: Compatible subModel Generation for Federated Learning in Device Heterogeneous Environment
- Towards a Simulation as a Service Platform for the Cloud-to-Things Continuum
- Cormas: The Software for Participatory Modelling and its Application for Managing Natural Resources in Senegal
- Asynchronous Many-Task Systems for Exascale (AMTE)
- Malleable APGAS Programs and their Support in Batch Job Schedulers
- Task-Level Checkpointing for Nested Fork-Join Programs using Work Stealing
- Making Uintah Performance Portable for Department of Energy Exascale Testbeds
- Benchmarking the Parallel 1D Heat Equation Solver in Chapel, Charm++, C++, HPX, Go, Julia, Python, Rust, Swift, and Java
- PECS 2023 - 2-page report
- Parallel Auto-scheduling of Counting Queries in Machine Learning Applications on HPC Systems
- Energy Efficiency Impact of Processing in Memory: A Comprehensive Review of Workloads on the UPMEM Architecture
- Enhancing Supercomputer Performance with Malleable Job Scheduling Strategies
- A Performance Modelling-driven Approach to Hardware Resource Scaling
- Applications and Benefits of UPMEM Commercial Massively Parallel Processing-In-Memory (PIM) Platform (ABUMPIMP) Minisymposium
- Adaptive HPC Input/Output Systems
- Dynamic Allocations in a Hierarchical Parallel Context
- Designing a Sustainable Serverless Graph Processing Tool on the Computing Continuum
- Diorthotis: A Parallel Batch Evaluator for Programming Assignments
- Experiences and Lessons Learned from PHYSICS: A Framework for Cloud Development with FaaS
- Improved IoT Application Placement in Fog Computing through Postponement
- High-Performance Distributed Computing with Smartphones
- Blockchain-based Decentralized Authority for Complex Organizational Structures Management
- Transparent Remote OpenMP Offloading based on MPI
- DAPHNE Runtime: Harnessing Parallelism for Integrated Data Analysis Pipelines
- Exploring Factors Impacting Data Offloading Performance in Edge and Cloud Environments
- HEAppE Middleware: From Desktop to HPC
- Towards Energy-Aware Machine Learning in Geo-Distributed IoT Settings
- OpenCUBE: Building an Open Source Cloud Blueprint with EPI Systems
- BDDC Preconditioning in the Microcard Project
- Online Job Failure Prediction in an HPC System
- Exploring Mapping Strategies for Co-allocated HPC Applications
- A Polynomial-time Algorithm for Detecting Potentially Unbounded Places in a Petri Net-based Concurrent System
- Data Assimilation with Ocean Models: A Case Study of Reduced Precision and Machine Learning in the Gulf of Mexico
- Massively Parallel EEG Algorithms for Pre-exascale Architectures
- Transitioning to Smart Sustainable Cities Based on Cutting-Edge Technological Improvements
- Algorithm Selection of MPI Collectives Considering System Utilization
- Service Management in Dynamic Edge Environments
- Path Plan Optimisation for UAV Assisted Data Collection in Large Areas
- Efficiently Distributed Federated Learning
Trade Policy (Buyer's Notice)
- About the products:
- ● Authenticity guarantee: this website is operated by China International Book Trading Corporation, and all books are guaranteed to be 100% genuine.
- ● Eco-friendly paper: most imported books are printed on eco-friendly lightweight paper, which is slightly yellowish in color and lighter in weight.
- ● Deckle-edge editions: the page edges are intentionally left uneven; these are usually hardcover editions and have greater collectible value.
About returns and exchanges:
- Because pre-ordered products are subject to special terms, once a purchase order has been formally placed, the buyer may not cancel all or part of the order without cause.
- Because of the special nature of imported books, in the following cases please refuse the delivery directly so the courier returns the shipment:
- ● Damaged outer packaging / wrong item shipped / missing items / damaged book exterior / incomplete accessories (e.g., CDs)
Then please contact us by phone at 400-008-1110 on a working day.
- If any of the following is found after signing for the delivery, please contact customer service within 5 working days to arrange a return or exchange:
- ● Missing pages / misordered pages / misprints / loose binding
About dispatch times:
- Under normal circumstances:
- ● [In stock] Shipped by courier from the Beijing warehouse within 48 hours of ordering.
- ● [Pre-order] [Pre-sale] Shipped from abroad after ordering; estimated arrival in about 5-8 weeks. The store ships by ZTO Express by default; SF Express is available with freight payable on delivery.
- ● For customers who need an invoice, dispatch may be delayed by an additional 1-2 working days (for urgent invoice requests, please call 010-68433105/3213).
- ● If other special circumstances affect dispatch times, we will post a notice on the website as soon as possible; please check for updates.
About delivery times:
- Because imported books are handed to third-party couriers after entering the country and being warehoused, we can only guarantee dispatch within the stated time and cannot guarantee an exact delivery date.
- ● Major cities: usually 2-4 days
- ● Remote areas: usually 4-7 days
About phone support hours:
- 010-68433105/3213 is answered Monday to Friday, 8:30 a.m. to 5:00 p.m. Calls cannot be taken on weekends or public holidays; we appreciate your understanding.
- At other times you can also reach us by email at customer@readgo.cn; messages are handled with priority on working days.
About couriers:
- ● Paid orders: delivered mainly by ZTO Express and ZJS Express; for order status inquiries, please call 010-68433105/3213.