Tags

Tags give the ability to mark specific points in history as being important.
  • v4.9.228

    45b83c18 · Linux 4.9.228
    This is the 4.9.228 stable release
    
  • v4.4.228

    ddb4a7b3 · Linux 4.4.228
    This is the 4.4.228 stable release
    
  • libata-5.8-2020-06-19

    libata-5.8-2020-06-19
    
  • Ubuntu-5.4.0-39.43

    Ubuntu-5.4.0-39.43
    
  • v5.7.4

    901e2c6c · Linux 5.7.4
    This is the 5.7.4 stable release
    
  • v5.7.3

    264e468f · Linux 5.7.3
    This is the 5.7.3 stable release
    
  • v5.6.19

    61aba373 · Linux 5.6.19
    This is the 5.6.19 stable release
    
  • v5.4.47

    fd8cd8ac · Linux 5.4.47
    This is the 5.4.47 stable release
    
  • v5.8-rc1

    b3a9e3b9 · Linux 5.8-rc1
    Linux 5.8-rc1
    
  • media/v5.8-2

    media updates for v5.8-rc1
    
  • x86-entry-2020-06-12

    The X86 entry, exception and interrupt code rework
    
    This all started about six months ago with the attempt to move the Posix CPU
    timer heavy lifting out of the timer interrupt code and just have lockless
    quick checks in that code path. A trivial five patches.
    
    This unearthed an inconsistency in the KVM handling of task work and the
    review requested to move all of this into generic code so other
    architectures can share.
    
    A valid request, and solved with another 25 patches, but those unearthed
    inconsistencies vs. RCU and instrumentation.
    
    Digging into this made it obvious that there are quite a few inconsistencies
    vs. instrumentation in general. The int3 text poke handling in particular
    was completely unprotected, and with the batched update of trace events even
    more likely to run into endless int3 recursion.
    
    In parallel the RCU implications of instrumenting fragile entry code came
    up in several discussions.
    
    The conclusion of the X86 maintainer team was to go all the way and make
    the protection against any form of instrumentation of fragile and dangerous
    code paths enforceable and verifiable by tooling.
    
    A first batch of preparatory work hit mainline with commit d5f744f9a2ac.
    
    The (almost) full solution introduced a new code section '.noinstr.text'
    into which all code that needs to be protected from instrumentation of any
    sort goes. Any call into instrumentable code out of this section has to be
    annotated. objtool has support to validate this. Kprobes now excludes this
    section fully, which also prevents BPF from fiddling with it, and all
    'noinstr' annotated functions also keep ftrace off. The section, kprobes
    and objtool changes are already merged.
    
    The major changes coming with this are:
    
        - Preparatory cleanups
    
        - Annotation of relevant functions to move them into the noinstr.text
          section or enforcing inlining by marking them __always_inline so the
          compiler cannot misplace or instrument them.
    
        - Splitting and simplifying the idtentry macro maze so that it is now
          clearly separated into simple exception entries and the more
          interesting ones which use interrupt stacks and have the paranoid
          handling vs. CR3 and GS.
    
        - Move quite a bit of the low level ASM functionality into C code:
    
           - enter_from and exit to user space handling. The ASM code now calls
             into C after doing the really necessary ASM handling and the return
             path goes back out without bells and whistles in ASM.
    
           - exception entry/exit got the equivalent treatment
    
           - move all IRQ tracepoints from ASM to C so they can be placed as
             appropriate which is especially important for the int3 recursion
             issue.
    
        - Consolidate the declaration and definition of entry points between 32
          and 64 bit. They share a common header and macros now.
    
        - Remove the extra device interrupt entry maze and just use the regular
          exception entry code.
    
        - All ASM entry points except NMI are now generated from the shared header
          file and the corresponding macros in the 32 and 64 bit entry ASM.
    
        - The C code entry points are consolidated as well with the help of
          DEFINE_IDTENTRY*() macros. This makes it possible to ensure at one
          central point that all corresponding entry points share the same
          semantics. The actual function body for most entry points is in an
          instrumentable and sane state.
    
          There are special macros for the more sensitive entry points,
          e.g. INT3 and of course the nasty paranoid #NMI, #MCE, #DB and #DF.
          They allow the whole entry instrumentation and RCU handling to be
          put into safe places instead of the previous 'pray that it is
          correct' approach.
    
        - The INT3 text poke handling is now completely isolated and the
          recursion issue banned. Aside from the entry rework this required other
          isolation work, e.g. the ability to force inline bsearch.
    
        - Prevent #DB on fragile entry code, entry relevant memory and disable
          it on NMI, #MC entry, which allowed getting rid of the nested #DB IST
          stack shifting hackery.
    
        - A few other cleanups and enhancements which have been made possible
          through this and already merged changes, e.g. consolidating and
          further restricting the IDT code so the IDT table becomes RO after
          init, which removes yet another popular attack vector.
    
        - About 680 lines of ASM maze are gone.
    
    There are a few open issues:
    
       - An escape out of the noinstr section in the MCE handler which needs
         some more thought, but under the aspect that MCE is a complete
         trainwreck by design and the probability of surviving it is low, this
         was not high on the priority list.
    
       - Paravirtualization
    
         When PV is enabled then objtool complains about a bunch of indirect
         calls out of the noinstr section. There are a few straightforward
         ways to fix this, but the other issues vs. general correctness were
         more pressing than paravirt.
    
       - KVM
    
         KVM is inconsistent as well. Patches have been posted, but they have
         not yet been commented on or picked up by the KVM folks.
    
       - IDLE
    
         Pretty much the same problems can be found in the low level idle code
         especially the parts where RCU stopped watching. This was beyond the
         scope of the more obvious and exposable problems and is on the todo
         list.
    
    The lesson learned from this brain-melting exercise to morph the evolved
    code base into something which can be validated and understood is that once
    again the violation of the most important engineering principle
    "correctness first" has caused quite a few people to spend valuable time on
    problems which could have been avoided in the first place. The "features
    first" tinkering mindset really has to stop.
    
    With that I want to say thanks to everyone involved in contributing to this
    effort. Special thanks go to the following people (alphabetical order):
    
       Alexandre Chartre
       Andy Lutomirski
       Borislav Petkov
       Brian Gerst
       Frederic Weisbecker
       Josh Poimboeuf
       Juergen Gross
       Lai Jiangshan
       Marco Elver
       Paolo Bonzini
       Paul McKenney
       Peter Zijlstra
       Vitaly Kuznetsov
       Will Deacon
    
  • ras-core-2020-06-12

    RAS updates from Borislav Petkov:
    
      * Unmap a whole guest page if an MCE is encountered in it to avoid
        follow-on MCEs leading to the guest crashing, by Tony Luck.
    
        This change collided with the entry changes and the merge resolution
        would have been rather unpleasant. To avoid that the entry branch was
        merged in before applying this. The resulting code did not change
        over the rebase.
    
      * AMD MCE error thresholding machinery cleanup and hotplug sanitization, by
        Thomas Gleixner.
    
      * Change the MCE notifiers to denote whether they have handled the error
        and not break the chain early by returning NOTIFY_STOP, thus giving the
        opportunity for the later handlers in the chain to see it. By Tony Luck.
    
      * Add AMD family 0x17, models 0x60-6f support, by Alexander Monakov.
    
      * Last but not least, the usual round of fixes and improvements.
    
  • locking-kcsan-2020-06-11

    The Kernel Concurrency Sanitizer (KCSAN)
    
    KCSAN is a dynamic race detector that relies on compile-time
    instrumentation and uses a watchpoint-based sampling approach to detect
    races.
    
    The feature was under development for quite some time and has already found
    legitimate bugs.
    
    Unfortunately it comes with a limitation, which was only understood late in
    the development cycle:
    
      It requires an up-to-date CLANG-11 compiler
    
    CLANG-11 is not yet released (scheduled for June), but it's the only
    compiler today which handles the kernel requirements and especially the
    annotations of functions to exclude them from KCSAN instrumentation
    correctly.
    
    These annotations really need to work so that low level entry code and
    especially int3 text poke handling can be completely isolated.
    
    A detailed discussion of the requirements and compiler issues can be found
    here:
    
      https://lore.kernel.org/lkml/CANpmjNMTsY_8241bS7=XAfqvZHFLrVEkv_uM4aDUWE_kh3Rvbw@mail.gmail.com/
    
    We came to the conclusion that trying to work around compiler limitations
    and bugs again would end up in a major trainwreck, so requiring a working
    compiler seemed to be the best choice.
    
    For Continuous Integration purposes the compiler restriction is manageable
    and that's where most xxSAN reports come from.
    
    For a change this limitation might make GCC people actually look at their
    bugs. Some issues with TSAN in GCC are 7 years old and one was 'fixed'
    3 years ago with a half-baked solution which 'solved' the reported issue
    but not the underlying problem.
    
    The KCSAN developers are also pondering using a GCC plugin to become
    independent, but that's not something which will show up in a few days.
    
    Blocking KCSAN until widespread compiler support is available is not a
    really good alternative because the continuous growth of lockless
    optimizations in the kernel demands proper tooling support.
    
  • locking-urgent-2020-06-11

    Peter Zijlstra's rework of atomics and fallbacks. This solves two problems:
    
      1) Compilers can uninline small atomic_* static inline functions, which
         exposes them to instrumentation.
    
      2) The instrumentation of atomic primitives was done at the architecture
         level while composites or fallbacks were provided at the generic level.
         As a result there are no uninstrumented variants of the fallbacks.
    
    Both issues were in the way of fully isolating fragile entry code paths
    and especially the text poke int3 handler which is prone to an endless
    recursion problem when anything in that code path is about to be
    instrumented. This was always a problem, but got elevated due to the new
    batch mode updates of tracing.
    
    The solution is to mark the functions __always_inline and to flip the
    fallback and instrumentation so the non-instrumented variants are at the
    architecture level and the instrumentation is done in generic code.
    
    The latter introduces another fallback variant which will go away once all
    architectures have been moved over to arch_atomic_*.