commit d6bc7e610ab98da569362b0b64bb20dafee1fcb9
Author: Greg Kroah-Hartman
Date: Wed Jul 11 16:03:52 2018 +0200

Linux 4.4.140

commit fecf1ae3e02705275c6e7fe875dd063576fd0d53
Author: Dan Carpenter
Date: Tue Jun 5 12:36:30 2018 +0300

staging: comedi: quatech_daqp_cs: fix no-op loop daqp_ao_insn_write()

commit 1376b0a2160319125c3a2822e8c09bd283cd8141 upstream.

There is a '>' vs '<' typo so this loop is a no-op.

Fixes: d35dcc89fc93 ("staging: comedi: quatech_daqp_cs: fix daqp_ao_insn_write()")
Signed-off-by: Dan Carpenter
Reviewed-by: Ian Abbott
Signed-off-by: Greg Kroah-Hartman

commit 8cff10da50b2f03819567b2ee342dffd9d0eb9eb
Author: Jann Horn
Date: Mon Jun 25 17:22:00 2018 +0200

netfilter: nf_log: don't hold nf_log_mutex during user access

commit ce00bf07cc95a57cd20b208e02b3c2604e532ae8 upstream.

The old code would indefinitely block other users of nf_log_mutex if a userspace access in proc_dostring() blocked, e.g. due to a userfaultfd region. Fix it by moving proc_dostring() out of the locked region.

This is a followup to commit 266d07cb1c9a ("netfilter: nf_log: fix sleeping function called from invalid context"), which changed this code from using rcu_read_lock() to taking nf_log_mutex.

Fixes: 266d07cb1c9a ("netfilter: nf_log: fix sleeping function calle[...]")
Signed-off-by: Jann Horn
Signed-off-by: Pablo Neira Ayuso
Signed-off-by: Greg Kroah-Hartman

commit 242dbd2b3df71a42e3bca5fbb2cf1eda161e078f
Author: Tokunori Ikegami
Date: Wed May 30 18:32:29 2018 +0900

mtd: cfi_cmdset_0002: Change erase functions to check chip good only

commit 79ca484b613041ca223f74b34608bb6f5221724b upstream.

Currently the erase functions check both chip ready and chip good status, but chip ready alone is not sufficient to determine the operation status, so change them to check chip good only. The retry logic is left in place so that the existing error handling remains unchanged.

Signed-off-by: Tokunori Ikegami
Reviewed-by: Joakim Tjernlund
Cc: Chris Packham
Cc: Brian Norris
Cc: David Woodhouse
Cc: Boris Brezillon
Cc: Marek Vasut
Cc: Richard Weinberger
Cc: Cyrille Pitchen
Cc: linux-mtd@lists.infradead.org
Cc: stable@vger.kernel.org
Signed-off-by: Boris Brezillon
Signed-off-by: Greg Kroah-Hartman

commit 2e0a09dcb8e08987fd98c3eb298370762b803e6e
Author: Tokunori Ikegami
Date: Wed May 30 18:32:28 2018 +0900

mtd: cfi_cmdset_0002: Change erase functions to retry for error

commit 45f75b8a919a4255f52df454f1ffdee0e42443b2 upstream.

The word write functions already retry on error, but the erase functions do not. Change the erase functions to retry in the same way. This is needed to work around a flash erase error that was seen only once, caused by chip_good() failing in do_erase_oneblock(). It was observed on the MACRONIX flash device MX29GL512FHT2I-11G, although the failure cannot be reproduced at the moment. The flash controller is the parallel flash interface integrated on BCM53003.

Signed-off-by: Tokunori Ikegami
Reviewed-by: Joakim Tjernlund
Cc: Chris Packham
Cc: Brian Norris
Cc: David Woodhouse
Cc: Boris Brezillon
Cc: Marek Vasut
Cc: Richard Weinberger
Cc: Cyrille Pitchen
Cc: linux-mtd@lists.infradead.org
Cc: stable@vger.kernel.org
Signed-off-by: Boris Brezillon
Signed-off-by: Greg Kroah-Hartman
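Illustrative sketch (not the actual do_erase_oneblock() code) of the erase flow the two cfi_cmdset_0002 fixes above describe: success is judged by chip_good() rather than chip_ready(), and the operation is retried a bounded number of times. send_erase_command(), wait_for_erase_done() and MAX_RETRIES_SKETCH are hypothetical stand-ins for the driver's own symbols.

/*
 * Sketch only: chip_good() checks status and data, chip_ready() only the
 * busy state, so chip_good() decides success; the erase is retried a
 * bounded number of times before giving up.
 */
#define MAX_RETRIES_SKETCH 3

static int erase_one_block_sketch(struct map_info *map, struct flchip *chip,
                                  unsigned long adr)
{
        int retries = MAX_RETRIES_SKETCH;

        do {
                send_erase_command(map, chip, adr);       /* hypothetical helper */
                if (!wait_for_erase_done(map, chip, adr)) /* hypothetical helper */
                        break;
                if (chip_good(map, adr, map_word_ff(map)))
                        return 0;       /* erased and verified */
        } while (--retries > 0);

        return -EIO;                    /* give up after bounded retries */
}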
commit 100b8c9452cc763b2ab84ce35526eda802afb1e4
Author: Tokunori Ikegami
Date: Wed May 30 18:32:27 2018 +0900

mtd: cfi_cmdset_0002: Change definition naming to retry write operation

commit 85a82e28b023de9b259a86824afbd6ba07bd6475 upstream.

The definition can be used for other program and erase operations also. So change the naming to MAX_RETRIES from MAX_WORD_RETRIES.

Signed-off-by: Tokunori Ikegami
Reviewed-by: Joakim Tjernlund
Cc: Chris Packham
Cc: Brian Norris
Cc: David Woodhouse
Cc: Boris Brezillon
Cc: Marek Vasut
Cc: Richard Weinberger
Cc: Cyrille Pitchen
Cc: linux-mtd@lists.infradead.org
Cc: stable@vger.kernel.org
Signed-off-by: Boris Brezillon
Signed-off-by: Greg Kroah-Hartman

commit 5be6e45d184c8a9dfa35626092b7a71e42ef5ee2
Author: Mikulas Patocka
Date: Wed Nov 23 16:52:01 2016 -0500

dm bufio: don't take the lock in dm_bufio_shrink_count

commit d12067f428c037b4575aaeb2be00847fc214c24a upstream.

dm_bufio_shrink_count() is called from do_shrink_slab to find out how many freeable objects are there. The reported value doesn't have to be precise, so we don't need to take the dm-bufio lock.

Suggested-by: David Rientjes
Signed-off-by: Mikulas Patocka
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

commit bf4e6336ccbc57bb8898c6fadd904139dd9bd927
Author: Martin Kaiser
Date: Mon Jun 18 22:41:03 2018 +0200

mtd: rawnand: mxc: set spare area size register explicitly

commit 3f77f244d8ec28e3a0a81240ffac7d626390060c upstream.

The v21 version of the NAND flash controller contains a Spare Area Size Register (SPAS) at offset 0x10. Its setting defaults to the maximum spare area size of 218 bytes. The size that is set in this register is used by the controller when it calculates the ECC bytes internally in hardware.

Usually, this register is updated from settings in the IIM fuses when the system is booting from NAND flash. For other boot media, however, the SPAS register remains at the default setting, which may not work for the particular flash chip on the board. The same goes for flash chips whose configuration cannot be set in the IIM fuses (e.g. chips with 2k sector size and 128 bytes spare area size can't be configured in the IIM fuses on imx25 systems).

Set the SPAS register explicitly during the preset operation. Derive the register value from mtd->oobsize that was detected during probe by decoding the flash chip's ID bytes.

While at it, rename the define for the spare area register's offset to NFC_V21_RSLTSPARE_AREA. The register at offset 0x10 on v1 controllers is different from the register on v21 controllers.

Fixes: d484018 ("mtd: mxc_nand: set NFC registers after reset")
Cc: stable@vger.kernel.org
Signed-off-by: Martin Kaiser
Reviewed-by: Sascha Hauer
Reviewed-by: Miquel Raynal
Signed-off-by: Boris Brezillon
Signed-off-by: Greg Kroah-Hartman

commit 3ce44cad02d3987d71ee69eed1e3b9e59559417c
Author: Mikulas Patocka
Date: Wed Nov 23 17:04:00 2016 -0500

dm bufio: drop the lock when doing GFP_NOIO allocation

commit 41c73a49df31151f4ff868f28fe4f129f113fa2c upstream.

If the first allocation attempt using GFP_NOWAIT fails, drop the lock and retry using GFP_NOIO allocation (lock is dropped because the allocation can take some time). Note that we won't do GFP_NOIO allocation when we loop for the second time, because the lock shouldn't be dropped between __wait_for_free_buffer and __get_unclaimed_buffer.

Signed-off-by: Mikulas Patocka
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman
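Simplified sketch of the allocation strategy described in the dm-bufio commit above: attempt a non-sleeping GFP_NOWAIT allocation while the client mutex is held, and only fall back to a sleeping GFP_NOIO allocation after dropping the lock. The structure and function names are illustrative, not the actual dm-bufio code.

/* Sketch only: c->lock and c->block_size stand in for the dm-bufio
 * client's mutex and buffer size; this is not the real alloc_buffer(). */
struct bufio_client_sketch {
        struct mutex lock;
        unsigned int block_size;
};

static void *alloc_buffer_data_sketch(struct bufio_client_sketch *c)
{
        void *data;

        /* Called with c->lock held: must not sleep here, because a
         * concurrent shrinker may be blocked on this same mutex. */
        data = kmalloc(c->block_size, GFP_NOWAIT | __GFP_NORETRY | __GFP_NOWARN);
        if (data)
                return data;

        /* Slow path: drop the lock so GFP_NOIO reclaim is allowed to sleep. */
        mutex_unlock(&c->lock);
        data = kmalloc(c->block_size, GFP_NOIO);
        mutex_lock(&c->lock);

        return data;
}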
commit 7396cb30cbfb379fb9fe715f348f2effe8eb52b7
Author: Douglas Anderson
Date: Thu Nov 17 11:24:20 2016 -0800

dm bufio: avoid sleeping while holding the dm_bufio lock

commit 9ea61cac0b1ad0c09022f39fd97e9b99a2cfc2dc upstream.

We've seen in-field reports showing _lots_ (18 in one case, 41 in another) of tasks all sitting there blocked on:

mutex_lock+0x4c/0x68
dm_bufio_shrink_count+0x38/0x78
shrink_slab.part.54.constprop.65+0x100/0x464
shrink_zone+0xa8/0x198

In the two cases analyzed, we see one task that looks like this:

Workqueue: kverityd verity_prefetch_io
__switch_to+0x9c/0xa8
__schedule+0x440/0x6d8
schedule+0x94/0xb4
schedule_timeout+0x204/0x27c
schedule_timeout_uninterruptible+0x44/0x50
wait_iff_congested+0x9c/0x1f0
shrink_inactive_list+0x3a0/0x4cc
shrink_lruvec+0x418/0x5cc
shrink_zone+0x88/0x198
try_to_free_pages+0x51c/0x588
__alloc_pages_nodemask+0x648/0xa88
__get_free_pages+0x34/0x7c
alloc_buffer+0xa4/0x144
__bufio_new+0x84/0x278
dm_bufio_prefetch+0x9c/0x154
verity_prefetch_io+0xe8/0x10c
process_one_work+0x240/0x424
worker_thread+0x2fc/0x424
kthread+0x10c/0x114

...and that looks to be the one holding the mutex.

The problem has been reproduced fairly easily:
0. Be running Chrome OS w/ verity enabled on the root filesystem
1. Pick test patch: http://crosreview.com/412360
2. Install launchBalloons.sh and balloon.arm from http://crbug.com/468342 ...that's just a memory stress test app.
3. On a 4GB rk3399 machine, run nice ./launchBalloons.sh 4 900 100000 ...that tries to eat 4 * 900 MB of memory and keep accessing.
4. Login to the Chrome web browser and restore many tabs

With that, I've seen printouts like:
DOUG: long bufio 90758 ms
...and the stack trace always shows we're in dm_bufio_prefetch().

The problem is that we try to allocate memory with GFP_NOIO while we're holding the dm_bufio lock. Instead we should be using GFP_NOWAIT. Using GFP_NOIO can cause us to sleep while holding the lock and that causes the above problems.

The current behavior is explained by David Rientjes:

It will still try reclaim initially because __GFP_WAIT (or __GFP_KSWAPD_RECLAIM) is set by GFP_NOIO. This is the cause of contention on dm_bufio_lock() that the thread holds. You want to pass GFP_NOWAIT instead of GFP_NOIO to alloc_buffer() when holding a mutex that can be contended by a concurrent slab shrinker (if count_objects didn't use a trylock, this pattern would trivially deadlock).

This change significantly increases responsiveness of the system while in this state. It makes a real difference because it unblocks kswapd. In the bug report analyzed, kswapd was hung:

kswapd0 D ffffffc000204fd8 0 72 2 0x00000000
Call trace:
[] __switch_to+0x9c/0xa8
[] __schedule+0x440/0x6d8
[] schedule+0x94/0xb4
[] schedule_preempt_disabled+0x28/0x44
[] __mutex_lock_slowpath+0x120/0x1ac
[] mutex_lock+0x4c/0x68
[] dm_bufio_shrink_count+0x38/0x78
[] shrink_slab.part.54.constprop.65+0x100/0x464
[] shrink_zone+0xa8/0x198
[] balance_pgdat+0x328/0x508
[] kswapd+0x424/0x51c
[] kthread+0x10c/0x114
[] ret_from_fork+0x10/0x40

By unblocking kswapd memory pressure should be reduced.

Suggested-by: David Rientjes
Reviewed-by: Guenter Roeck
Signed-off-by: Douglas Anderson
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

commit aaf87537eb46628a8841d02b0fd4a81d1c06161e
Author: Vlastimil Babka
Date: Thu Jun 7 17:09:29 2018 -0700

mm, page_alloc: do not break __GFP_THISNODE by zonelist reset

commit 7810e6781e0fcbca78b91cf65053f895bf59e85f upstream.

In __alloc_pages_slowpath() we reset zonelist and preferred_zoneref for allocations that can ignore memory policies. The zonelist is obtained from current CPU's node. This is a problem for __GFP_THISNODE allocations that want to allocate on a different node, e.g.
because the allocating thread has been migrated to a different CPU. This has been observed to break SLAB in our 4.4-based kernel, because there it relies on __GFP_THISNODE working as intended. If a slab page is put on the wrong node's list, then further list manipulations may corrupt the list because page_to_nid() is used to determine which node's list_lock should be locked and thus we may take a wrong lock and race.

Current SLAB implementation seems to be immune by luck thanks to commit 511e3a058812 ("mm/slab: make cache_grow() handle the page allocated on arbitrary node") but there may be others assuming that __GFP_THISNODE works as promised.

We can fix it by simply removing the zonelist reset completely. There is actually no reason to reset it, because memory policies and cpusets don't affect the zonelist choice in the first place. This was different when commit 183f6371aac2 ("mm: ignore mempolicies when using ALLOC_NO_WATERMARK") introduced the code, as mempolicies provided their own restricted zonelists.

We might consider this for 4.17 although I don't know if there's anything currently broken. SLAB is currently not affected, but in kernels older than 4.7 that don't yet have 511e3a058812 ("mm/slab: make cache_grow() handle the page allocated on arbitrary node") it is. That's at least 4.4 LTS. Older ones I'll have to check. So stable backports should be more important, but will have to be reviewed carefully, as the code went through many changes. BTW I think that also the ac->preferred_zoneref reset is currently useless if we don't also reset ac->nodemask from a mempolicy to NULL first (which we probably should for the OOM victims etc?), but I would leave that for a separate patch.

Link: http://lkml.kernel.org/r/20180525130853.13915-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Fixes: 183f6371aac2 ("mm: ignore mempolicies when using ALLOC_NO_WATERMARK")
Acked-by: Mel Gorman
Cc: Michal Hocko
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Vlastimil Babka
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

commit 7f9c65ea59709c5512d05d8c0d5c48be2f9ad478
Author: Brad Love
Date: Tue Mar 6 14:15:34 2018 -0500

media: cx25840: Use subdev host data for PLL override

commit 3ee9bc12342cf546313d300808ff47d7dbb8e7db upstream.

The cx25840 driver currently configures 885, 887, and 888 using default divisors for each chip. This change checks whether the cx23885 driver has passed the cx25840 a non-default clock rate for a specific chip. If a cx23885 board has left clk_freq at 0, the clock default values will be used to configure the PLLs. This patch only has effect on 888 boards that set clk_freq to 25M.

Signed-off-by: Brad Love
Signed-off-by: Mauro Carvalho Chehab
Cc: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman
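Hedged sketch of the mechanism the cx25840 change above relies on: the bridge driver passes a board-specific clock rate through the subdev host data, and the PLL setup falls back to the chip default when nothing was provided. The hint structure shown is hypothetical, not the real cx25840/cx23885 platform data.

/* Illustrative stand-in for whatever the bridge driver attaches
 * via v4l2_set_subdev_hostdata(). */
struct pll_hint_sketch {
        u32 clk_freq;   /* 0 means "use the chip default" */
};

static u32 pick_pll_ref_clock_sketch(struct v4l2_subdev *sd, u32 default_hz)
{
        struct pll_hint_sketch *hint = v4l2_get_subdev_hostdata(sd);

        if (hint && hint->clk_freq)
                return hint->clk_freq;  /* e.g. 25 MHz on some 888 boards */

        return default_hz;              /* board left clk_freq at 0 */
}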
commit 99b6d2c3bb7984b9ab5eac50120e0dde53235f2f
Author: Tony Luck
Date: Fri Jun 22 11:54:23 2018 +0200

x86/mce: Fix incorrect "Machine check from unknown source" message

commit 40c36e2741d7fe1e66d6ec55477ba5fd19c9c5d2 upstream.

Some injection testing resulted in the following console log:

mce: [Hardware Error]: CPU 22: Machine Check Exception: f Bank 1: bd80000000100134
mce: [Hardware Error]: RIP 10: {pmem_do_bvec+0x11d/0x330 [nd_pmem]}
mce: [Hardware Error]: TSC c51a63035d52 ADDR 3234bc4000 MISC 88
mce: [Hardware Error]: PROCESSOR 0:50654 TIME 1526502199 SOCKET 0 APIC 38 microcode 2000043
mce: [Hardware Error]: Run the above through 'mcelog --ascii'
Kernel panic - not syncing: Machine check from unknown source

This confused everybody because the first line quite clearly shows that we found a logged error in "Bank 1", while the last line says "unknown source".

The problem is that the Linux code doesn't do the right thing for a local machine check that results in a fatal error.

It turns out that we know very early in the handler whether the machine check is fatal. The call to mce_no_way_out() has checked all the banks for the CPU that took the local machine check. If it says we must crash, we can do so right away with the right messages.

We do scan all the banks again. This means that we might initially not see a problem, but during the second scan find something fatal. If this happens we print a slightly different message (so I can see if it actually ever happens).

[ bp: Remove unneeded severity assignment. ]

Signed-off-by: Tony Luck
Signed-off-by: Borislav Petkov
Signed-off-by: Thomas Gleixner
Cc: Ashok Raj
Cc: Dan Williams
Cc: Qiuxu Zhuo
Cc: linux-edac
Cc: stable@vger.kernel.org # 4.2
Link: http://lkml.kernel.org/r/52e049a497e86fd0b71c529651def8871c804df0.1527283897.git.tony.luck@intel.com
Signed-off-by: Greg Kroah-Hartman

commit e462d226fbb3fd36fc2425970b17108ad3b5bb9d
Author: Yazen Ghannam
Date: Sat Apr 30 14:33:57 2016 +0200

x86/mce: Detect local MCEs properly

commit fead35c68926682c90c995f22b48f1c8d78865c1 upstream.

Check the MCG_STATUS_LMCES bit on Intel to verify that the current MCE is local. It is always local on AMD.

Signed-off-by: Yazen Ghannam
[ Massaged it a bit. Reflowed comments. Shut up -Wmaybe-uninitialized. ]
Signed-off-by: Borislav Petkov
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Tony Luck
Cc: linux-edac
Link: http://lkml.kernel.org/r/1462019637-16474-8-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman

commit ef111ea31575bdc50c0c914fe036a1d0ad0cae4e
Author: Daniel Rosenberg
Date: Mon Jul 2 16:59:37 2018 -0700

HID: debug: check length before copy_to_user()

commit 717adfdaf14704fd3ec7fa2c04520c0723247eac upstream.

If our length is greater than the size of the buffer, we overflow the buffer.

Cc: stable@vger.kernel.org
Signed-off-by: Daniel Rosenberg
Reviewed-by: Benjamin Tissoires
Signed-off-by: Jiri Kosina
Signed-off-by: Greg Kroah-Hartman
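Minimal sketch of the pattern behind the HID debug fix above: clamp the copy length to both the caller's buffer size and the data actually available before calling copy_to_user(). Names are illustrative, not the actual hid-debug.c code.

static ssize_t debug_read_sketch(char __user *buf, size_t count,
                                 const char *kbuf, size_t avail)
{
        /* Never copy more than the user asked for or than we have. */
        size_t len = min(count, avail);

        if (copy_to_user(buf, kbuf, len))
                return -EFAULT;

        return len;
}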
commit c0b166a64eef92a410a50f71b12b1b63cc5f32db
Author: Gustavo A. R. Silva
Date: Fri Jun 29 17:08:44 2018 -0500

HID: hiddev: fix potential Spectre v1

commit 4f65245f2d178b9cba48350620d76faa4a098841 upstream.

uref->field_index, uref->usage_index, finfo.field_index and cinfo.index can be indirectly controlled by user-space, hence leading to a potential exploitation of the Spectre variant 1 vulnerability.

This issue was detected with the help of Smatch:

drivers/hid/usbhid/hiddev.c:473 hiddev_ioctl_usage() warn: potential spectre issue 'report->field' (local cap)
drivers/hid/usbhid/hiddev.c:477 hiddev_ioctl_usage() warn: potential spectre issue 'field->usage' (local cap)
drivers/hid/usbhid/hiddev.c:757 hiddev_ioctl() warn: potential spectre issue 'report->field' (local cap)
drivers/hid/usbhid/hiddev.c:801 hiddev_ioctl() warn: potential spectre issue 'hid->collection' (local cap)

Fix this by sanitizing such structure fields before using them to index report->field, field->usage and hid->collection.

Notice that given that speculation windows are large, the policy is to kill the speculation on the first load and not worry if it can be completed with a dependent load/store [1].

[1] https://marc.info/?l=linux-kernel&m=152449131114778&w=2

Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva
Signed-off-by: Jiri Kosina
Signed-off-by: Greg Kroah-Hartman

commit dc3f661542a18e2cf2a3f410aadb6593b095a9b8
Author: Jason Andryuk
Date: Fri Jun 22 12:25:49 2018 -0400

HID: i2c-hid: Fix "incomplete report" noise

commit ef6eaf27274c0351f7059163918f3795da13199c upstream.

Commit ac75a041048b ("HID: i2c-hid: fix size check and type usage") started writing messages when the ret_size is <= 2 from i2c_master_recv. However, my device i2c-DLL07D1 returns 2 for a short period of time (~0.5s) after I stop moving the pointing stick or touchpad. It varies, but you get ~50 messages each time which spams the log hard.

[ 95.925055] i2c_hid i2c-DLL07D1:01: i2c_hid_get_input: incomplete report (83/2)

This has also been observed with a i2c-ALP0017.

[ 1781.266353] i2c_hid i2c-ALP0017:00: i2c_hid_get_input: incomplete report (30/2)

Only print the message when ret_size is totally invalid and less than 2 to cut down on the log spam.

Fixes: ac75a041048b ("HID: i2c-hid: fix size check and type usage")
Reported-by: John Smith
Cc: stable@vger.kernel.org
Signed-off-by: Jason Andryuk
Signed-off-by: Jiri Kosina
Signed-off-by: Greg Kroah-Hartman

commit 9611051641837be394d2c2d7ba795dc279176fad
Author: Jon Derrick
Date: Mon Jul 2 18:45:18 2018 -0400

ext4: check superblock mapped prior to committing

commit a17712c8e4be4fa5404d20e9cd3b2b21eae7bc56 upstream.

This patch attempts to close a hole leading to a BUG seen with hot removals during writes [1].

A block device (NVME namespace in this test case) is formatted to EXT4 without partitions. It's mounted and write I/O is run to a file, then the device is hot removed from the slot. The superblock attempts to be written to the drive which is no longer present.

The typical chain of events leading to the BUG:
ext4_commit_super()
__sync_dirty_buffer()
submit_bh()
submit_bh_wbc()
BUG_ON(!buffer_mapped(bh));

This fix checks for the superblock's buffer head being mapped prior to syncing.

[1] https://www.spinics.net/lists/linux-ext4/msg56527.html

Signed-off-by: Jon Derrick
Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman
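Hedged sketch of the guard described in "ext4: check superblock mapped prior to committing": verify the superblock's buffer_head is still mapped before writing it out, so a hot-removed device produces an error instead of tripping the BUG_ON() in submit_bh_wbc(). This is a simplification, not the full ext4_commit_super().

static int commit_super_sketch(struct super_block *sb)
{
        struct buffer_head *sbh = EXT4_SB(sb)->s_sbh;

        /* The device may have been hot-removed; its buffers get unmapped. */
        if (!sbh || !buffer_mapped(sbh))
                return -EIO;

        mark_buffer_dirty(sbh);
        return sync_dirty_buffer(sbh);
}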
commit 9ccad874bedabcc2fde0c1ea44a9b9f8afed5194
Author: Theodore Ts'o
Date: Sun Jun 17 18:11:20 2018 -0400

ext4: add more mount time checks of the superblock

commit bfe0a5f47ada40d7984de67e59a7d3390b9b9ecc upstream.

The kernel's ext4 mount-time checks were more permissive than e2fsprogs's libext2fs checks when opening a file system. If the superblock is considered too insane for debugfs or e2fsck to operate on it, the kernel has no business trying to mount it.

This will make file system fuzzing tools work harder, but the failure cases that they find will be more useful and be easier to evaluate.

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman

commit ff6c96461be35381399466ad58f02b8d78ab480a
Author: Theodore Ts'o
Date: Sun Jun 17 00:41:14 2018 -0400

ext4: add more inode number paranoia checks

commit c37e9e013469521d9adb932d17a1795c139b36db upstream.

If there is a directory entry pointing to a system inode (such as a journal inode), complain and declare the file system to be corrupted. Also, if the superblock's first inode number field is too small, refuse to mount the file system.

This addresses CVE-2018-10882.

https://bugzilla.kernel.org/show_bug.cgi?id=200069

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman

commit b88fc699a023e0ef86f647c3d48a17d7cfff1f2a
Author: Theodore Ts'o
Date: Fri Jun 15 12:28:16 2018 -0400

ext4: clear i_data in ext4_inode_info when removing inline data

commit 6e8ab72a812396996035a37e5ca4b3b99b5d214b upstream.

When converting an inode from storing its data in-line to storing it in a data block, ext4_destroy_inline_data_nolock() was only clearing the on-disk copy of the i_blocks[] array. It was not clearing the copy of i_blocks[] in ext4_inode_info, in i_data[], which is the copy actually used by ext4_map_blocks().

This didn't matter much if we are using extents, since the extents header would be invalid and thus the extents code would re-initialize the extents tree. But if we are using indirect blocks, the previous contents of the i_blocks array will be treated as block numbers, with potentially catastrophic results to the file system integrity and/or user data.

This gets worse if the file system is using a 1k block size and s_first_data is zero, but even without this, the file system can get quite badly corrupted.

This addresses CVE-2018-10881.

https://bugzilla.kernel.org/show_bug.cgi?id=200015

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman

commit faf01457010eb68009fba45dcad2232138c34164
Author: Theodore Ts'o
Date: Fri Jun 15 12:27:16 2018 -0400

ext4: include the illegal physical block in the bad map ext4_error msg

commit bdbd6ce01a70f02e9373a584d0ae9538dcf0a121 upstream.

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman

commit 353ebd3e98869b50ed47364d05acdf679c2c05c6
Author: Theodore Ts'o
Date: Thu Jun 14 12:55:10 2018 -0400

ext4: verify the depth of extent tree in ext4_find_extent()

commit bc890a60247171294acc0bd67d211fa4b88d40ba upstream.

If there is a corrupted file system where the claimed depth of the extent tree is -1, this can cause a massive buffer overrun leading to sadness. This addresses CVE-2018-10877.

https://bugzilla.kernel.org/show_bug.cgi?id=199417

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman
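Hedged sketch of the sanity-check idea in "ext4: verify the depth of extent tree in ext4_find_extent()": reject an obviously bogus on-disk depth before it is used to walk the tree. The bound and error code here are illustrative; the real ext4 code has its own limit and error reporting.

#define MAX_SANE_EXTENT_DEPTH_SKETCH 5   /* illustrative bound only */

static int check_extent_depth_sketch(int depth_from_disk)
{
        /* A corrupted fs can claim e.g. a depth of -1 or a huge value;
         * walking the tree with such a depth overruns buffers. */
        if (depth_from_disk < 0 || depth_from_disk > MAX_SANE_EXTENT_DEPTH_SKETCH)
                return -EUCLEAN;        /* "filesystem corrupted" style error */

        return 0;
}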
commit db3b00e3f392e9f879f7fd202437e68f90f35765
Author: Theodore Ts'o
Date: Thu Jun 14 00:58:00 2018 -0400

ext4: only look at the bg_flags field if it is valid

commit 8844618d8aa7a9973e7b527d038a2a589665002c upstream.

The bg_flags field in the block group descriptors is only valid if the uninit_bg or metadata_csum feature is enabled. We were not consistently looking at this field; fix this.

Also block group #0 must never have uninitialized allocation bitmaps, or need to be zeroed, since that's where the root inode, and other special inodes are set up. Check for these conditions and mark the file system as corrupted if they are detected.

This addresses CVE-2018-10876.

https://bugzilla.kernel.org/show_bug.cgi?id=199403

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman

commit afa9c75025bd1e24ccdc56fa331e865b626769e6
Author: Theodore Ts'o
Date: Wed Jun 13 23:00:48 2018 -0400

ext4: always check block group bounds in ext4_init_block_bitmap()

commit 819b23f1c501b17b9694325471789e6b5cc2d0d2 upstream.

Regardless of whether the flex_bg feature is set, we should always check to make sure the bits we are setting in the block bitmap are within the block group bounds.

https://bugzilla.kernel.org/show_bug.cgi?id=199865

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman

commit b7d29dc8fe8d23243d3d87109099bdc34a684712
Author: Theodore Ts'o
Date: Wed Jun 13 23:08:26 2018 -0400

ext4: make sure bitmaps and the inode table don't overlap with bg descriptors

commit 77260807d1170a8cf35dbb06e07461a655f67eee upstream.

It's really bad when the allocation bitmaps and the inode table overlap with the block group descriptors, since it causes random corruption of the bg descriptors. So we really want to head those off at the pass.

https://bugzilla.kernel.org/show_bug.cgi?id=199865

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman

commit 2cd33a53177ce739fe5f68052b2a737f1c40b425
Author: Theodore Ts'o
Date: Sat Jun 16 20:21:45 2018 -0400

jbd2: don't mark block as modified if the handle is out of credits

commit e09463f220ca9a1a1ecfda84fcda658f99a1f12a upstream.

Do not set the b_modified flag in a block's journal head until after we're sure that jbd2_journal_dirty_metadata() will not abort with an error due to there not being enough space reserved in the jbd2 handle.

Otherwise, future attempts to modify the buffer may lead to a large number of spurious errors and warnings.

This addresses CVE-2018-10883.

https://bugzilla.kernel.org/show_bug.cgi?id=200071

Signed-off-by: Theodore Ts'o
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman

commit 6ee6f1787e69db4b8b155450ca8694c145c2517c
Author: Paulo Alcantara
Date: Thu Jul 5 13:46:34 2018 -0300

cifs: Fix infinite loop when using hard mount option

commit 7ffbe65578b44fafdef577a360eb0583929f7c6e upstream.

For every request we send, whether it is SMB1 or SMB2+, we attempt to reconnect tcon (cifs_reconnect_tcon or smb2_reconnect) before carrying out the request.

So, while server->tcpStatus != CifsNeedReconnect, we wait for the reconnection to succeed on wait_event_interruptible_timeout(). If it returns, that means that either the condition was evaluated to true, or timeout elapsed, or it was interrupted by a signal.

Since we're not handling the case where the process woke up due to a received signal (-ERESTARTSYS), the next call to wait_event_interruptible_timeout() will _always_ fail and we end up looping forever inside either cifs_reconnect_tcon() or smb2_reconnect().
Here's an example of how to trigger that:

$ mount.cifs //foo/share /mnt/test -o username=foo,password=foo,vers=1.0,hard

(break connection to server before executing below cmd)

$ stat -f /mnt/test & sleep 140
[1] 2511

$ ps -aux -q 2511
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2511 0.0 0.0 12892 1008 pts/0 S 12:24 0:00 stat -f /mnt/test

$ kill -9 2511

(wait for a while; process is stuck in the kernel)

$ ps -aux -q 2511
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2511 83.2 0.0 12892 1008 pts/0 R 12:24 30:01 stat -f /mnt/test

Using the 'hard' mount option means that cifs.ko will keep retrying indefinitely; however, we must allow the process to be killed, otherwise it would hang the system.

Signed-off-by: Paulo Alcantara
Cc: stable@vger.kernel.org
Reviewed-by: Aurelien Aptel
Signed-off-by: Steve French
Signed-off-by: Greg Kroah-Hartman

commit df0fe72e2e1f18da3d045f0211b2c833d22578ff
Author: Lars Ellenberg
Date: Mon Jun 25 11:39:52 2018 +0200

drbd: fix access after free

commit 64dafbc9530c10300acffc57fae3269d95fa8f93 upstream.

We have struct drbd_requests { ... struct bio *private_bio; ... } to hold a bio clone for local submission.

On local IO completion, we put that bio, and in case we want to use the result later, we overload that member to hold the ERR_PTR() of the completion result, which, before v4.3, used to be the passed in "int error", so we could first bio_put(), then assign.

v4.3-rc1~100^2~21 4246a0b63bd8 "block: add a bi_error field to struct bio" changed that:

bio_put(req->private_bio);
- req->private_bio = ERR_PTR(error);
+ req->private_bio = ERR_PTR(bio->bi_error);

Which introduces an access after free, because it was non-obvious that req->private_bio == bio.

Impact of that was mostly unnoticeable, because we only use that value in a multiple-failure case, and even then map any "unexpected" error code to EIO, so worst case we could potentially mask a more specific error with EIO in a multiple failure case.

Unless the pointed to memory region was unmapped, as is the case with CONFIG_DEBUG_PAGEALLOC, in which case this results in

BUG: unable to handle kernel paging request

v4.13-rc1~70^2~75 4e4cbee93d56 "block: switch bios to blk_status_t" changes it further to

bio_put(req->private_bio);
req->private_bio = ERR_PTR(blk_status_to_errno(bio->bi_status));

And blk_status_to_errno() now contains a WARN_ON_ONCE() for unexpected values, which catches this "sometimes", if the memory has been reused quickly enough for other things.

Should also go into stable since 4.3, with the trivial change around 4.13.

Cc: stable@vger.kernel.org
Fixes: 4246a0b63bd8 block: add a bi_error field to struct bio
Reported-by: Sarah Newman
Signed-off-by: Lars Ellenberg
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
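Minimal sketch of the ordering issue the drbd fix above describes: req->private_bio is the very bio being completed, so anything needed from the bio must be read before bio_put() may free it. The request structure is a stand-in, and on 4.4 the field is bio->bi_error rather than the newer bi_status used here.

struct drbd_request_sketch {
        struct bio *private_bio;
};

static void complete_local_bio_sketch(struct drbd_request_sketch *req)
{
        struct bio *bio = req->private_bio;
        int error = blk_status_to_errno(bio->bi_status); /* read first ...      */

        bio_put(bio);                        /* ... then drop the reference      */
        req->private_bio = ERR_PTR(error);   /* no dereference of bio after put  */
}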
commit 8e0817deeb968cfd0e176ed204754b712af7ecd2
Author: Christian Borntraeger
Date: Thu Jun 21 14:49:38 2018 +0200

s390: Correct register corruption in critical section cleanup

commit 891f6a726cacbb87e5b06076693ffab53bd378d7 upstream.

In the critical section cleanup we must not mess with r1. For march=z9 or older, larl + ex (instead of exrl) are used with r1 as a temporary register. This can clobber r1 in several interrupt handlers. Fix this by using r11 as a temp register. r11 is being saved by all callers of cleanup_critical.

Fixes: 6dd85fbb87 ("s390: move expoline assembler macros to a header")
Cc: stable@vger.kernel.org #v4.16
Reported-by: Oliver Kurz
Reported-by: Petr Tesařík
Signed-off-by: Christian Borntraeger
Reviewed-by: Hendrik Brueckner
Signed-off-by: Martin Schwidefsky
Signed-off-by: Greg Kroah-Hartman

commit 9a737329c7c4a341009b7398164db8fa8e5358f0
Author: Jann Horn
Date: Mon Jun 25 16:25:44 2018 +0200

scsi: sg: mitigate read/write abuse

commit 26b5b874aff5659a7e26e5b1997e3df2c41fa7fd upstream.

As Al Viro noted in commit 128394eff343 ("sg_write()/bsg_write() is not fit to be called under KERNEL_DS"), sg improperly accesses userspace memory outside the provided buffer, permitting kernel memory corruption via splice(). But it doesn't just do it on ->write(), also on ->read().

As a band-aid, make sure that the ->read() and ->write() handlers can not be called in weird contexts (kernel context or credentials different from file opener), like for ib_safe_file_access().

If someone needs to use these interfaces from different security contexts, a new interface should be written that goes through the ->ioctl() handler.

I've mostly copypasted ib_safe_file_access() over as sg_safe_file_access() because I couldn't find a good common header - please tell me if you know a better way.

[mkp: s/_safe_/_check_/]

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc:
Signed-off-by: Jann Horn
Acked-by: Douglas Gilbert
Signed-off-by: Martin K. Petersen
Signed-off-by: Greg Kroah-Hartman
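Hedged sketch of the sg_check_file_access() idea from "scsi: sg: mitigate read/write abuse" above: refuse ->read()/->write() when called from kernel context or with credentials different from the file opener. The exact helper differs by kernel version (uaccess_kernel() here; older kernels test segment_eq(get_fs(), KERNEL_DS)); this is illustrative, not the exact sg code.

static int check_file_access_sketch(struct file *filp, const char *caller)
{
        /* Reject callers running with a kernel address limit (splice etc.). */
        if (uaccess_kernel()) {
                pr_err_once("%s: process %d (%s) called from kernel context, this is not allowed.\n",
                            caller, task_tgid_vnr(current), current->comm);
                return -EACCES;
        }

        /* Reject callers whose credentials changed after open(). */
        if (filp->f_cred != current_cred()) {
                pr_err_once("%s: process %d (%s) changed security contexts after opening file, this is not allowed.\n",
                            caller, task_tgid_vnr(current), current->comm);
                return -EPERM;
        }

        return 0;
}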
commit 02a8a256f5bea9fc7156cea4d3c7f10958c0355d
Author: Changbin Du
Date: Wed Jan 31 23:48:49 2018 +0800

tracing: Fix missing return symbol in function_graph output

commit 1fe4293f4b8de75824935f8d8e9a99c7fc6873da upstream.

The function_graph tracer does not show the interrupt return marker for the leaf entry. On leaf entries, we see an unbalanced interrupt marker (the interrupt was entered, but never left).

Before:
1) | SyS_write() {
1) | __fdget_pos() {
1) 0.061 us | __fget_light();
1) 0.289 us | }
1) | vfs_write() {
1) 0.049 us | rw_verify_area();
1) + 15.424 us | __vfs_write();
1) ==========> |
1) 6.003 us | smp_apic_timer_interrupt();
1) 0.055 us | __fsnotify_parent();
1) 0.073 us | fsnotify();
1) + 23.665 us | }
1) + 24.501 us | }

After:
0) | SyS_write() {
0) | __fdget_pos() {
0) 0.052 us | __fget_light();
0) 0.328 us | }
0) | vfs_write() {
0) 0.057 us | rw_verify_area();
0) | __vfs_write() {
0) ==========> |
0) 8.548 us | smp_apic_timer_interrupt();
0) <========== |
0) + 36.507 us | } /* __vfs_write */
0) 0.049 us | __fsnotify_parent();
0) 0.066 us | fsnotify();
0) + 50.064 us | }
0) + 50.952 us | }

Link: http://lkml.kernel.org/r/1517413729-20411-1-git-send-email-changbin.du@intel.com
Cc: stable@vger.kernel.org
Fixes: f8b755ac8e0cc ("tracing/function-graph-tracer: Output arrows signal on hardirq call/return")
Signed-off-by: Changbin Du
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Greg Kroah-Hartman

commit 9d45ae0158172943ba789cf39babaefaa67eaba2
Author: Cannon Matthews
Date: Tue Jul 3 17:02:43 2018 -0700

mm: hugetlb: yield when prepping struct pages

commit 520495fe96d74e05db585fc748351e0504d8f40d upstream.

When booting with very large numbers of gigantic (i.e. 1G) pages, the operations in the loop of gather_bootmem_prealloc, and specifically prep_compound_gigantic_page, take a very long time, and can cause a softlockup if enough pages are requested at boot.

For example booting with 3844 1G pages requires prepping (set_compound_head, init the count) over 1 billion 4K tail pages, which takes considerable time.

Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to prevent this lockup.

Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844 and no softlockup is reported, and the hugepages are reported as successfully set up.

Link: http://lkml.kernel.org/r/20180627214447.260804-1-cannonmatthews@google.com
Signed-off-by: Cannon Matthews
Reviewed-by: Andrew Morton
Reviewed-by: Mike Kravetz
Acked-by: Michal Hocko
Cc: Andres Lagar-Cavilla
Cc: Peter Feiner
Cc: Greg Thelen
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

commit 69a044d59c37221f97ae0308995573960b646252
Author: Richard Weinberger
Date: Mon May 28 22:04:32 2018 +0200

ubi: fastmap: Correctly handle interrupted erasures in EBA

commit 781932375ffc6411713ee0926ccae8596ed0261c upstream.

Fastmap cannot track the LEB unmap operation, therefore it can happen that after an interrupted erasure the mapping still looks good from Fastmap's point of view, while reading from the PEB will cause an ECC error and confuse the upper layer.

Instead of teaching users of UBI how to deal with that, we read back the VID header and check for errors. If the PEB is empty or shows ECC errors we fix up the mapping and schedule the PEB for erasure.

Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Cc:
Reported-by: martin bayern
Signed-off-by: Richard Weinberger
Signed-off-by: Greg Kroah-Hartman

commit 45f47e3a14933afa5261110dbeb580fbf6f6df1d
Author: Sean Nyekjaer
Date: Tue May 22 19:45:09 2018 +0200

ARM: dts: imx6q: Use correct SDMA script for SPI5 core

commit df07101e1c4a29e820df02f9989a066988b160e6 upstream.

According to the reference manual the shp_2_mcu / mcu_2_shp scripts must be used for devices connected through the SPBA.

This fixes an issue we saw with DMA transfers. Sometimes the SPI controller RX FIFO was not empty after a DMA transfer and the driver got stuck in the next PIO transfer when it read one word more than expected.

commit dd4b487b32a35 ("ARM: dts: imx6: Use correct SDMA script for SPI cores") is fixing the same issue but only for SPI1 - 4.

Fixes: 677940258dd8e ("ARM: dts: imx6q: enable dma for ecspi5")
Signed-off-by: Sean Nyekjaer
Reviewed-by: Fabio Estevam
Signed-off-by: Shawn Guo
Signed-off-by: Greg Kroah-Hartman

commit 753465123663c7c96b515befe4a199a2104582d3
Author: Taehee Yoo
Date: Mon Jun 11 22:16:33 2018 +0900

netfilter: nf_tables: use WARN_ON_ONCE instead of BUG_ON in nft_do_chain()

commit adc972c5b88829d38ede08b1069718661c7330ae upstream.

When the depth of a chain is bigger than NFT_JUMP_STACK_SIZE, nft_do_chain() crashes. But there is no need to crash hard here.

Suggested-by: Florian Westphal
Signed-off-by: Taehee Yoo
Acked-by: Florian Westphal
Signed-off-by: Pablo Neira Ayuso
Signed-off-by: Greg Kroah-Hartman
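Small sketch of the "warn, don't crash" pattern applied by the nf_tables commit above: an over-deep jump stack is reported once and the packet is dropped instead of hitting a BUG_ON(). Simplified; not the exact nft_do_chain() code.

static unsigned int check_jump_depth_sketch(unsigned int stackptr)
{
        /* Previously a BUG_ON(stackptr >= NFT_JUMP_STACK_SIZE). */
        if (WARN_ON_ONCE(stackptr >= NFT_JUMP_STACK_SIZE))
                return NF_DROP;         /* drop the packet instead of crashing */

        return NF_ACCEPT;               /* placeholder for "keep evaluating"   */
}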
commit 1e40d09a5575f6a2db841473baf329dc17bbe3ff
Author: Keith Busch
Date: Thu Sep 14 13:54:39 2017 -0400

nvme-pci: initialize queue memory before interrupts

commit 161b8be2bd6abad250d4b3f674bdd5480f15beeb upstream.

A spurious interrupt before the nvme driver has initialized the completion queue may inadvertently cause the driver to believe it has a completion to process. This may result in a NULL dereference since the nvmeq's tags are not set at this point. The patch initializes the host's CQ memory so that a spurious interrupt isn't mistaken for a real completion.

Signed-off-by: Keith Busch
Reviewed-by: Johannes Thumshirn
Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
[bwh: Backported to 4.4: adjust context]
Cc: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman

commit 5ac07564b47fdffbc293f9b129eb5af394b3d31f
Author: Masami Hiramatsu
Date: Wed Mar 29 14:00:25 2017 +0900

kprobes/x86: Do not modify singlestep buffer while resuming

commit 804dec5bda9b4fcdab5f67fe61db4a0498af5221 upstream.

Do not modify the singlestep execution buffer (kprobe.ainsn.insn) while resuming from single-stepping; instead, modify the buffer to add a jump-back instruction when the buffer is prepared.

Signed-off-by: Masami Hiramatsu
Cc: Ananth N Mavinakayanahalli
Cc: Andrey Ryabinin
Cc: Anil S Keshavamurthy
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: David S . Miller
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ye Xiaolong
Link: http://lkml.kernel.org/r/149076361560.22469.1610155860343077495.stgit@devbox
Signed-off-by: Ingo Molnar
Reviewed-by: "Steven Rostedt (VMware)"
Signed-off-by: Alexey Makhalov
Signed-off-by: Greg Kroah-Hartman

commit 21e9341ed96024aaead1a12c77e9869ad81f8b96
Author: Ben Hutchings
Date: Tue Jun 19 18:47:52 2018 +0100

ipv4: Fix error return value in fib_convert_metrics()

The validation code modified by commit 5b5e7a0de2bb ("net: metrics: add proper netlink validation") is organised differently in older kernel versions. The fib_convert_metrics() function that is modified in the backports to 4.4 and 4.9 needs to return an error code, not a success flag.

Signed-off-by: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman

commit 2aa465dbe267a09311560fcf4592db7689270f90
Author: Wolfram Sang
Date: Tue Apr 18 20:38:35 2017 +0200

i2c: rcar: fix resume by always initializing registers before transfer

commit ae481cc139658e89eb3ea671dd00b67bd87f01a3 upstream.

Resume failed because of uninitialized registers. Instead of adding a resume callback, we simply initialize registers before every transfer. This lightweight change is more robust and will keep us safe if we ever need support for power domains or dynamic frequency changes.

Signed-off-by: Wolfram Sang
Acked-by: Kuninori Morimoto
Signed-off-by: Wolfram Sang
Cc: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman

commit 8d33a08bd4c623fbf0e1be97725c59c8fa868ddb
Author: Vasanthakumar Thiagarajan
Date: Mon Sep 26 21:56:24 2016 +0300

ath10k: fix rfc1042 header retrieval in QCA4019 with eth decap mode

commit 2f38c3c01de945234d23dd163e3528ccb413066d upstream.

On chipsets from QCA99X0 onwards (QCA99X0, QCA9984, QCA4019 and future), rx_hdr_status is not padded to align on a 4-byte boundary. Define a new hw_params field to handle different alignment behaviour between different hw. This patch fixes improper retrieval of the rfc1042 header with QCA4019. This patch along with "ath10k: Properly remove padding from the start of rx payload" will fix traffic failure in ethernet decap mode for QCA4019.

Signed-off-by: Vasanthakumar Thiagarajan
Signed-off-by: Kalle Valo
[bwh: This just adds the part that was left out of the previous backport, commit b88fb9ea475a.]
Signed-off-by: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman
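Small sketch of the calling convention behind the fib_convert_metrics() backport note above: the 4.4/4.9 callers do "if (err) goto errout;", so the function must return 0 or a negative errno rather than a boolean success flag. The body is illustrative, not the real metrics parsing.

static int convert_metrics_sketch(const struct nlattr *attr)
{
        if (!attr)
                return 0;        /* nothing to convert is still success */

        if (nla_len(attr) < (int)sizeof(u32))
                return -EINVAL;  /* failure as a negative errno, not "false" */

        /* ... parse the attribute and fill in the metrics ... */
        return 0;
}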
commit 99580265784e108e0f294564db80f689039b2e07
Author: Dave Hansen
Date: Tue Dec 22 14:52:38 2015 -0800

x86/boot: Fix early command-line parsing when matching at end

commit 02afeaae9843733a39cd9b11053748b2d1dc5ae7 upstream.

The x86 early command line parsing in cmdline_find_option_bool() is buggy. If it matches a specified 'option' all the way to the end of the command-line, it will consider it a match.

For instance,

cmdline = "foo";
cmdline_find_option_bool(cmdline, "fool");

will return 1. This is particularly annoying since we have actual FPU options like "noxsave" and "noxsaves". So, the command-line "foo bar noxsave" will match *BOTH* "noxsave" and "noxsaves". (This turns out not to be an actual problem because "noxsave" implies "noxsaves", but it's still confusing.)

To fix this, we simplify the code and stop tracking 'len'. 'len' was trying to indicate either the NULL terminator *OR* the end of a non-NULL-terminated command line at 'COMMAND_LINE_SIZE'. But, each of the three states is *already* checking 'cmdline' for a NULL terminator. We _only_ need to check if we have overrun 'COMMAND_LINE_SIZE', and that we can do without keeping 'len' around.

Also add some comments to clarify what is going on.

Signed-off-by: Dave Hansen
Signed-off-by: Borislav Petkov
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: fenghua.yu@intel.com
Cc: yu-cheng.yu@intel.com
Link: http://lkml.kernel.org/r/20151222225238.9AEB560C@viggo.jf.intel.com
Signed-off-by: Ingo Molnar
Cc: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman

commit 80fbfb1ce6fdfdb265d4ab3b0b87a7c3cf8f6aff
Author: Tetsuo Handa
Date: Sat May 26 09:53:14 2018 +0900

n_tty: Access echo_* variables carefully.

commit ebec3f8f5271139df618ebdf8427e24ba102ba94 upstream.

syzbot is reporting stalls at __process_echoes() [1]. This is because, once ldata->echo_commit < ldata->echo_tail becomes true for some reason, the discard loop serves as an almost infinite loop. This patch tries to avoid falling into the ldata->echo_commit < ldata->echo_tail situation by accessing the echo_* variables more carefully.

Since reset_buffer_flags() is called without output_lock held, it should not touch echo_* variables. And omit a call to reset_buffer_flags() from n_tty_open() by using vzalloc().

Since add_echo_byte() is called without output_lock held, it needs a memory barrier between storing into echo_buf[] and incrementing the echo_head counter. echo_buf() needs a corresponding memory barrier before reading echo_buf[]. Lack of handling the possibility of a not-yet-stored multi-byte operation might be the reason for falling into the ldata->echo_commit < ldata->echo_tail situation, for if I do WARN_ON(ldata->echo_commit == tail + 1) prior to echo_buf(ldata, tail + 1), the WARN_ON() fires.

Also, explicitly mask with the buffer size for the former "while" loop, and use ldata->echo_commit > tail for the latter "while" loop.

[1] https://syzkaller.appspot.com/bug?id=17f23b094cd80df750e5b0f8982c521ee6bcbf40

Signed-off-by: Tetsuo Handa
Reported-by: syzbot
Cc: Peter Hurley
Cc: stable
Signed-off-by: Greg Kroah-Hartman

commit 58fcaeb30e27df934c3cd5f13733292d2b455fa0
Author: Laura Abbott
Date: Mon Jun 11 11:06:53 2018 -0700

staging: android: ion: Return an ERR_PTR in ion_map_kernel

commit 0a2bc00341dcfcc793c0dbf4f8d43adf60458b05 upstream.

The expected return value from ion_map_kernel is an ERR_PTR. The error path for a vmalloc failure currently just returns NULL, triggering a warning in ion_buffer_kmap_get. Encode the vmalloc failure as an ERR_PTR.
Reported-by: syzbot+55b1d9f811650de944c6@syzkaller.appspotmail.com Signed-off-by: Laura Abbott Cc: stable Signed-off-by: Greg Kroah-Hartman commit a1f75c3f3a79147441649ea96d3a0b58e13ded31 Author: Tetsuo Handa Date: Sat May 26 09:53:13 2018 +0900 n_tty: Fix stall at n_tty_receive_char_special(). commit 3d63b7e4ae0dc5e02d28ddd2fa1f945defc68d81 upstream. syzbot is reporting stalls at n_tty_receive_char_special() [1]. This is because comparison is not working as expected since ldata->read_head can change at any moment. Mitigate this by explicitly masking with buffer size when checking condition for "while" loops. [1] https://syzkaller.appspot.com/bug?id=3d7481a346958d9469bebbeb0537d5f056bdd6e8 Signed-off-by: Tetsuo Handa Reported-by: syzbot Fixes: bc5a5e3f45d04784 ("n_tty: Don't wrap input buffer indices at buffer size") Cc: stable Cc: Peter Hurley Signed-off-by: Greg Kroah-Hartman commit 216c5fab831e394d4a15e31c20fb5fa46ba24cd8 Author: Karoly Pados Date: Sat Jun 9 13:26:08 2018 +0200 USB: serial: cp210x: add Silicon Labs IDs for Windows Update commit 2f839823382748664b643daa73f41ee0cc01ced6 upstream. Silicon Labs defines alternative VID/PID pairs for some chips that when used will automatically install drivers for Windows users without manual intervention. Unfortunately, these IDs are not recognized by the Linux module, so using these IDs improves user experience on one platform but degrades it on Linux. This patch addresses this problem. Signed-off-by: Karoly Pados Cc: stable Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman commit 3357fbc733cd84523c09c35c1924daad7e389dbf Author: Johan Hovold Date: Mon Jun 18 10:24:03 2018 +0200 USB: serial: cp210x: add CESINEL device ids commit 24160628a34af962ac99f2f58e547ac3c4cbd26f upstream. Add device ids for CESINEL products. Reported-by: Carlos Barcala Lara Cc: stable Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman commit 6f8b3fd2a4726648c9a57f49df8358573414774d Author: Houston Yaroschoff Date: Mon Jun 11 12:39:09 2018 +0200 usb: cdc_acm: Add quirk for Uniden UBC125 scanner commit 4a762569a2722b8a48066c7bacf0e1dc67d17fa1 upstream. Uniden UBC125 radio scanner has USB interface which fails to work with cdc_acm driver: usb 1-1.5: new full-speed USB device number 4 using xhci_hcd cdc_acm 1-1.5:1.0: Zero length descriptor references cdc_acm: probe of 1-1.5:1.0 failed with error -22 Adding the NO_UNION_NORMAL quirk for the device fixes the issue: usb 1-4: new full-speed USB device number 15 using xhci_hcd usb 1-4: New USB device found, idVendor=1965, idProduct=0018 usb 1-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3 usb 1-4: Product: UBC125XLT usb 1-4: Manufacturer: Uniden Corp. usb 1-4: SerialNumber: 0001 cdc_acm 1-4:1.0: ttyACM0: USB ACM device `lsusb -v` of the device: Bus 001 Device 015: ID 1965:0018 Uniden Corporation Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 2 Communications bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x1965 Uniden Corporation idProduct 0x0018 bcdDevice 0.01 iManufacturer 1 Uniden Corp. 
iProduct 2 UBC125XLT iSerial 3 0001 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 48 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 2 Communications bInterfaceSubClass 2 Abstract (modem) bInterfaceProtocol 0 None iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x87 EP 7 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 10 CDC Data bInterfaceSubClass 0 Unused bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Device Status: 0x0000 (Bus Powered) Signed-off-by: Houston Yaroschoff Cc: stable Acked-by: Oliver Neukum Signed-off-by: Greg Kroah-Hartman
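For reference, a hedged sketch of how such a device typically receives the NO_UNION_NORMAL quirk in cdc-acm's USB id table; the exact entry added by the commit above may differ in placement and comment.

static const struct usb_device_id acm_ids_sketch[] = {
        { USB_DEVICE(0x1965, 0x0018),           /* Uniden UBC125XLT */
          .driver_info = NO_UNION_NORMAL,       /* reports no union descriptor */
        },
        { }                                     /* terminating entry */
};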