What kind of person should we aim to become? (repost) Original: Why do leaders and above at domestic IT companies rarely write code, while Google's Jeff Dean reportedly still does? Which situation is better? - GXSC's answer on Zhihu. Author:
Why you should hire a junior for your next open position
amazing tools for you
Without feeling guilty about it
Let me tell you a story. Early in my career, I worked with a client who ran a social media sentiment analysis platform, back when Twitter was still
Beyond the usual support for new wired/wireless network hardware and the other routine churn in the big Linux networking subsystem, the Linux 6.8 kernel brings some key improvements to the core networking code that can yield up to ~40% better TCP performance under many concurrent network connections.
SIEVE is Simpler than LRU: an Efficient Turn-Key Eviction Algorithm for Web Caches
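Going by the paper's title and the algorithm's public description, SIEVE combines a FIFO queue, a single "visited" bit per object, and a hand that sweeps from the queue's tail toward its head. The sketch below is a minimal Python illustration of that idea, not the authors' reference implementation; all names are mine.

```python
class SieveCache:
    """Minimal sketch of SIEVE eviction: FIFO insertion at the head,
    one 'visited' bit per entry, and a hand that scans from the tail
    (oldest) toward the head, clearing bits until it finds an
    unvisited entry to evict. Unlike LRU, a cache hit never moves
    the entry; it only flips a bit."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.values = {}      # key -> value
        self.visited = {}     # key -> visited bit
        self.queue = []       # queue[0] = tail (oldest), queue[-1] = head (newest)
        self.hand = 0         # eviction hand, starts at the tail

    def _evict(self):
        while True:
            if self.hand >= len(self.queue):
                self.hand = 0                  # wrap back to the tail
            key = self.queue[self.hand]
            if self.visited[key]:
                self.visited[key] = False      # second chance: clear and move on
                self.hand += 1
            else:
                del self.values[key]
                del self.visited[key]
                self.queue.pop(self.hand)      # hand now points at the next entry
                return

    def get(self, key):
        if key in self.values:
            self.visited[key] = True           # hit: set the bit, no reordering
            return self.values[key]
        return None

    def put(self, key, value):
        if key in self.values:
            self.values[key] = value
            self.visited[key] = True
            return
        if len(self.values) >= self.capacity:
            self._evict()
        self.queue.append(key)                 # new objects enter at the head
        self.values[key] = value
        self.visited[key] = False
```

The appeal the title claims is visible even in this sketch: a hit is a dictionary write, with none of LRU's list surgery.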
GitHub - YSGStudyHards/DotNetGuide: 🐱🚀 A C#/.NET/.NET Core guide for learning, work, and interviews. It records, collects, and summarizes C#/.NET/.NET Core fundamentals, learning roadmaps, hands-on development, tutorial videos, articles, books, project frameworks, community organizations, essential developer tools, common interview questions, interview tips, résumé templates, and the author's own modest insights from study and work. The hope is that we can all learn and improve together 👊, so that your present self is no longer lost ✨. If this knowledge base helps you, remember to show your support (follow, star, share) 💖.
We investigate the unusual way the memory subsystem interacts with branch prediction, and how this interaction shapes software performance.
Recently, on LinkedIn, I read a post about an engineer who was surprised that his new, optimized version of a parser was slower than the original. The optimization consisted of removing branches, which common wisdom holds to be the root of all evil, right? Yet the new version was slower, and a benchmark opened his eyes.
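The post's parser isn't shown, so here is a generic, hypothetical illustration of the transformation it describes: the same computation written with a data-dependent branch and then "branchless", with the condition folded into arithmetic. The point is only that both forms are equivalent; which one is faster depends on how predictable the branch is, which is exactly why the engineer's benchmark mattered.

```python
def count_digits_branchy(s: str) -> int:
    # Straightforward version: one data-dependent branch per character.
    n = 0
    for ch in s:
        if '0' <= ch <= '9':
            n += 1
    return n

def count_digits_branchless(s: str) -> int:
    # "Optimized" version: the condition becomes an arithmetic operand
    # (a bool coerces to 0 or 1), so the loop body has no if-branch.
    n = 0
    for ch in s:
        n += ('0' <= ch <= '9')
    return n
```

On mostly-predictable input the branchy version can win, because a well-predicted branch is nearly free while the arithmetic always executes; only a measurement settles it for a given workload.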
We've all been there: the trains you're servicing for a customer suddenly brick themselves and the manufacturer claims that's because you...
We demonstrate a high-performance vendor-agnostic method for massively parallel solving of ensembles of ordinary differential equations (ODEs) and stochastic differential equations (SDEs) on GPUs. The method is integrated with a widely used differential equation solver library in a high-level language (Julia's DifferentialEquations.jl) and enables GPU acceleration without requiring code changes by the user. Our approach achieves state-of-the-art performance compared to hand-optimized CUDA-C++ kernels while performing 20--100$\times$ faster than the vectorizing map (vmap) approach implemented in JAX and PyTorch. Performance evaluation on NVIDIA, AMD, Intel, and Apple GPUs demonstrates performance portability and vendor-agnosticism. We show composability with MPI to enable distributed multi-GPU workflows. The implemented solvers are fully featured -- supporting event handling, automatic differentiation, and incorporation of datasets via the GPU's texture memory -- allowing scientists to take advantage of GPU acceleration on all major current architectures without changing their model code and without loss of performance. We distribute the software as an open-source library https://github.com/SciML/DiffEqGPU.jl
We'll break down the fundamentals of automated testing and the software design choices that enhance testability, and explore testing tools and frameworks.
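As a concrete taste of "design that enhances testability": one classic choice is injecting dependencies instead of reaching for them directly, so a test can substitute a fake. The example below is hypothetical (the service and rate-provider names are mine, not from the piece) and uses Python's standard `unittest.mock`.

```python
from unittest.mock import Mock

class PriceService:
    """Takes its exchange-rate provider as a constructor argument
    rather than calling an external API directly, so tests can pass
    in a fake and stay fast and deterministic."""

    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def in_eur(self, usd_amount):
        return usd_amount * self.rate_provider.usd_to_eur()

# In a test, a Mock stands in for the real provider:
fake = Mock()
fake.usd_to_eur.return_value = 0.5
service = PriceService(fake)
```

Had `PriceService` called the API itself, every test would need a network connection; the injected seam is what makes it unit-testable.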
In the heat of an enterprise deal moment, it’s easy to think very short-term about the long-term costs of one-off specials and “small requirements.” There’s tremendous pressure to maximize the importance of a feature tweak to close this quarter’s big deal, and similar pressure to minimize both
In his book "Drive: The Surprising Truth About What Motivates Us," Daniel Pink talks about "motivation 3.0", which comes after basic needs are covered ("motivation 1.0") and carrots and sticks ("motivation 2.0"). There are three main components in Pink's theory: • Autonomy to be in control of our destiny — how do we work? Is there a dr...