Volker Lendecke [Thu, 5 Dec 2024 15:50:12 +0000 (16:50 +0100)]
smbd: Make can_delete_directory_fsp() look cleaner in strace
I'm not sure, but it might be that can_delete_directory_fsp() is not
handed a full fd when O_PATH is unavailable. We open a real fd for
readdir() in all cases, and that one is definitely usable with openat
& friends. Use it as the dirfsp for openat.
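The underlying pattern is plain POSIX: a real (non-O_PATH) directory fd
opened for readdir() is equally valid as the dirfd argument of openat()
and friends. A minimal sketch of that pattern, not the smbd code itself:

    #include <dirent.h>
    #include <fcntl.h>
    #include <unistd.h>

    int scan_dir(const char *path)
    {
            /* A real (non-O_PATH) directory fd is good for both
             * readdir() and as the dirfd argument of openat(). */
            int dfd = open(path, O_RDONLY | O_DIRECTORY);
            if (dfd == -1) {
                    return -1;
            }

            DIR *d = fdopendir(dfd);        /* takes ownership of dfd */
            if (d == NULL) {
                    close(dfd);
                    return -1;
            }

            struct dirent *e;
            while ((e = readdir(d)) != NULL) {
                    int fd = openat(dirfd(d), e->d_name,
                                    O_RDONLY | O_NOFOLLOW);
                    if (fd != -1) {
                            close(fd);
                    }
            }
            closedir(d);                    /* also closes dfd */
            return 0;
    }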
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Pavel Filipenský <pfilipensky@samba.org>
Volker Lendecke [Wed, 4 Dec 2024 15:30:03 +0000 (16:30 +0100)]
printing: Remove a few obsolete openat_pathref_fsp() calls
driver_convert_unix calls filename_convert_dirfsp, which these days
fills smb_fname->fsp. So openat_pathref_fsp() will immediately return
success as it finds smb_fname->fsp != NULL.
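Schematically, the guard at the top of openat_pathref_fsp() makes those
calls no-ops; here is a self-contained paraphrase with stand-in types
(not the real Samba prototypes):

    /* Stand-in types; the real definitions live in smbd. */
    typedef int NTSTATUS;
    #define NT_STATUS_OK 0
    struct files_struct;
    struct smb_filename { struct files_struct *fsp; };

    static NTSTATUS openat_pathref_fsp_guard(struct smb_filename *smb_fname)
    {
            if (smb_fname->fsp != NULL) {
                    /* filename_convert_dirfsp() already attached it */
                    return NT_STATUS_OK;
            }
            return 1;       /* only now would the real open happen */
    }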
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Pavel Filipenský <pfilipensky@samba.org>
Volker Lendecke [Wed, 4 Dec 2024 07:54:19 +0000 (08:54 +0100)]
smbd: Simplify smb_q_posix_acl()
Ensure it's called with a valid fsp. In the pathinfo case, use
get_posix_fsp() in the caller, in the fileinfo case the client has
sent us the fid. A client-visible fid is always a fsa fsp.
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Pavel Filipenský <pfilipensky@samba.org>
Volker Lendecke [Fri, 29 Nov 2024 12:06:03 +0000 (13:06 +0100)]
libcli: Make handling implicit_owner_rights bit easier to read
The first time I came across this I missed the "FALL_THROUGH" and had
to look closely at what happens. I had expected
IMPLICIT_OWNER_READ_CONTROL_AND_WRITE_DAC_RIGHTS to grant two rights,
and with this change that is obvious at a glance. The code was correct
before, but to me it reads more clearly now. YMMV.
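To illustrate the readability point, here is the shape of the change
with the constants redefined locally (the values shown are the standard
NT access bits; this is not the libcli source):

    #include <stdint.h>

    enum implicit_owner_rights {
            IMPLICIT_OWNER_READ_CONTROL_RIGHTS,
            IMPLICIT_OWNER_READ_CONTROL_AND_WRITE_DAC_RIGHTS,
    };

    #define SEC_STD_READ_CONTROL 0x00020000
    #define SEC_STD_WRITE_DAC    0x00040000

    /* Before, the two-rights case was a FALL_THROUGH into the
     * one-right case; now each case grants its bits explicitly. */
    static uint32_t implicit_rights(enum implicit_owner_rights mode)
    {
            switch (mode) {
            case IMPLICIT_OWNER_READ_CONTROL_AND_WRITE_DAC_RIGHTS:
                    return SEC_STD_READ_CONTROL | SEC_STD_WRITE_DAC;
            case IMPLICIT_OWNER_READ_CONTROL_RIGHTS:
                    return SEC_STD_READ_CONTROL;
            }
            return 0;
    }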
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Pavel Filipenský <pfilipensky@samba.org>
Pavel Filipenský [Wed, 11 Dec 2024 21:33:17 +0000 (22:33 +0100)]
s3:open.c: Fix a typo
Signed-off-by: Pavel Filipenský <pfilipensky@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
Autobuild-User(master): Pavel Filipensky <pfilipensky@samba.org>
Autobuild-Date(master): Tue Dec 17 11:23:50 UTC 2024 on atb-devel-224
Jones Syue [Thu, 26 Sep 2024 09:17:14 +0000 (17:17 +0800)]
s3:vfs_crossrename: add back checking for errno ENOENT
strace gives a clue: Samba tries to remove 'file.txt' in the dst folder,
but the file does not exist there yet, so the call fails with ENOENT:
renameat(32, "file.txt", 31, "file.txt") = -1 EXDEV (Invalid cross-device link)
unlinkat(31, "file.txt", 0) = -1 ENOENT (No such file or directory)
Commit 5c18f074be92 ("s3: VFS: crossrename. Use real dirfsp for
SMB_VFS_RENAMEAT()") seems to have unintentionally removed the ENOENT
check; adding it back addresses the first issue.
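The fix amounts to the classic pattern of tolerating ENOENT when
unlinking a destination that may not exist yet; roughly, and not the
exact vfs_crossrename code:

    #include <errno.h>
    #include <unistd.h>

    /* Remove a possibly pre-existing destination before the
     * copy+unlink fallback; ENOENT only means there was nothing
     * to remove, so it must not be treated as failure. */
    static int remove_dst_if_present(int dst_dirfd, const char *name)
    {
            if (unlinkat(dst_dirfd, name, 0) == -1 && errno != ENOENT) {
                    return -1;      /* a real error */
            }
            return 0;
    }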
Signed-off-by: Pavel Filipenský <pfilipensky@samba.org>
Reviewed-by: Andreas Schneider <asn@samba.org>
Autobuild-User(master): Pavel Filipensky <pfilipensky@samba.org>
Autobuild-Date(master): Mon Dec 16 19:32:32 UTC 2024 on atb-devel-224
Martin Schwenke [Wed, 28 Jun 2023 04:01:44 +0000 (14:01 +1000)]
ctdb-scripts: Support storing statd-callout state in cluster filesystem
CTDB_STATD_CALLOUT_SHARED_STORAGE is a new configuration variable
indicating where statd-callout should store its NFS client locking
data. See the update to ctdb-script.options(5) for details.
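For example, a line like the following in ctdb-script.options would
select the cluster-filesystem method (the value syntax here is
illustrative; the manpage is authoritative, and the default remains
the existing persistent_db method):

    CTDB_STATD_CALLOUT_SHARED_STORAGE=shared_dir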
This adds back functionality that was removed in commit 12cc82623150ca4a83482f1b7165401cbdecd3de. The commit message doesn't
say why this was changed but it was most likely due to a cluster
filesystem hanging at inopportune times. Hence, this is re-added as a
non-default option. There are two justifications for re-adding it:
* The existing method (persistent_db) relies on dequeuing data during
the monitor event, which loses any queued data on node crash.
* NFS-Ganesha writes NFSv4 client locking data to a cluster
filesystem, by default. Something similar might as well exist for
NFSv3.
Note that the files for sm-notify could instead be created in add-client.
However, this would require an alternate implementation of
send_notifies() (or a change to the implementation for persistent_db
too). It seems better to leave add-client lightweight and do the work
in notify, since add-client is a more frequent operation.
Unconditionally create the state directory on startup. This is
currently implicitly created for persistent_db when the queue
directory is created. However, it isn't created anywhere else for
shared_dir, so do it in a common place.
In test mode, the shared storage location has a prefix added so files
are created within the test environment.
Signed-off-by: Martin Schwenke <mschwenke@ddn.com>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Martin Schwenke [Tue, 4 Jun 2024 23:32:21 +0000 (09:32 +1000)]
ctdb-scripts: Fix impending SM_NOTIFY versus record deletion race
SM_NOTIFYs are sent before client records are deleted. Theoretically,
this means new records resulting from lock reclaim can be deleted.
This doesn't actually happen at the moment because any new "records"
resulting from lock reclaim are entered into the queue directory and
only dequeued to the database during a later monitor event. Since a
monitor event can't collide with an ipreallocated event, no records
can be dequeued into the database during the ipreallocated event, so
they can't be deleted by delete_records().
However, a subsequent commit will add direct writing of records into a
shared cluster filesystem directory. This means that add-client
events will cause records to be added directly to that directory so,
without a fix, the race will be able to occur.
So, delete records before sending SM_NOTIFYs. In theory, the script
could be killed before all SM_NOTIFYs are successfully sent, resulting
in loss of locks. However, given the overall lack of error checking,
there are other, more likely problems.
Signed-off-by: Martin Schwenke <mschwenke@ddn.com>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Martin Schwenke [Tue, 27 Jun 2023 03:37:56 +0000 (13:37 +1000)]
ctdb-scripts: Factor out some statd-callout functions
This captures all of the persistent database (currently ctdb.tdb)
implementation-specific details in functions. Alternate
implementations can now be easily added.
Signed-off-by: Martin Schwenke <mschwenke@ddn.com>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
Martin Schwenke [Wed, 5 Jul 2023 22:20:37 +0000 (08:20 +1000)]
ctdb-scripts: Use CTDB_NFS_SHARED_STATE_DIR in nfs-ganesha-callout
Rename CTDB_NFS_STATE_MNT to CTDB_NFS_SHARED_STATE_DIR. It doesn't
have to be a mount but can be any directory in a cluster filesystem.
CTDB_NFS_SHARED_STATE_DIR will soon be used in statd_callout_helper,
so the variable name might as well be better.
With this change, it will still only be used by nfs-ganesha-callout,
which isn't yet supported (i.e. it still lives in doc/examples). The
rest of the comments below refer to behaviour changes in that script.
CTDB_NFS_SHARED_STATE_DIR is now mandatory when GPFS is used. This is
much saner than choosing the first GPFS filesystem - if the state
directory changes then connection metadata can be lost.
Drop CTDB_NFS_STATE_FS_TYPE. The filesystem type is now determined
from CTDB_NFS_SHARED_STATE_DIR and it is now checked against supported
filesystems. This will catch the case when the filesystem for the
specified directory has not been mounted and the filesystem for the
mountpoint (e.g. ext4) is not a supported filesystem for shared state.
A side-effect is that the filesystem containing
CTDB_NFS_SHARED_STATE_DIR must be mounted when nfs-ganesha-callout is
first run.
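The idea of the check can be sketched in C with statfs(2); this is an
assumption-laden illustration, since the real callout does the
equivalent in shell:

    #include <sys/vfs.h>            /* statfs(2) on Linux */

    /* 'supported_magic' stands in for the f_type value of a
     * supported cluster filesystem (an assumption for this sketch;
     * the real nfs-ganesha-callout logic is shell, not C). */
    static int shared_state_fs_ok(const char *dir, long supported_magic)
    {
            struct statfs sfs;

            if (statfs(dir, &sfs) == -1) {
                    return 0;       /* e.g. directory missing */
            }
            /* If the cluster filesystem is not mounted, this sees
             * the mountpoint's type (e.g. ext4) and rejects it. */
            return sfs.f_type == supported_magic;
    }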
While touching this file, my shfmt pre-commit hook wants to insert a
trailing ;; into a case statement. Let's sneak that in here too.
Signed-off-by: Martin Schwenke <mschwenke@ddn.com>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
s4:rpc_server/netlogon: fix dcesrv_netr_LogonSamLogon_base_call() for ServerAuthenticateKerberos()
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Andreas Schneider <asn@samba.org>
Autobuild-User(master): Andreas Schneider <asn@cryptomilk.org>
Autobuild-Date(master): Thu Dec 12 15:00:10 UTC 2024 on atb-devel-224
s3:rpc_server: make use of dcesrv_assoc_group_common_destructor()
We need to detach dcesrv_iface_state from dcesrv_assoc_group,
if dcesrv_assoc_group is freed first.
Typically this doesn't happen, but it does when
rpc_worker_connection_terminated explicitly calls
talloc_unlink(conn, conn->assoc_group)
and dcesrv_iface_state_store_conn() is used.
But we'd better do it in all assoc_group destructors.
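The usual talloc idiom for this is a destructor on the assoc group
that clears back-pointers before the memory disappears; a minimal
self-contained sketch of the pattern (not the actual dcesrv code):

    #include <talloc.h>

    struct iface_state;

    struct assoc_group {
            struct iface_state *state;      /* back-pointer to clear */
    };

    struct iface_state {
            struct assoc_group *group;      /* must not dangle */
    };

    /* Runs when the assoc group is freed first: detach the iface
     * state so its own destructor never reads freed memory. */
    static int assoc_group_destructor(struct assoc_group *g)
    {
            if (g->state != NULL) {
                    g->state->group = NULL;
                    g->state = NULL;
            }
            return 0;
    }

    static struct assoc_group *assoc_group_new(TALLOC_CTX *mem_ctx)
    {
            struct assoc_group *g = talloc_zero(mem_ctx, struct assoc_group);
            if (g != NULL) {
                    talloc_set_destructor(g, assoc_group_destructor);
            }
            return g;
    }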
==381007==ERROR: AddressSanitizer: heap-use-after-free on address 0x50d000004f80 at pc 0x7f15fc12e0ac bp 0x7ffe43267780 sp 0x7ffe43267778
READ of size 8 at 0x50d000004f80 thread T0
#0 0x7f15fc12e0ab in dcesrv_iface_state_destructor ../../librpc/rpc/dcesrv_handles.c:166
#1 0x7f15fc0f7d76 in _tc_free_internal ../../lib/talloc/talloc.c:1158
#2 0x7f15fc0f7acd in _tc_free_children_internal ../../lib/talloc/talloc.c:1669
#3 0x7f15fc0f7acd in _tc_free_internal ../../lib/talloc/talloc.c:1184
#4 0x7f15fc0f7acd in _tc_free_children_internal ../../lib/talloc/talloc.c:1669
#5 0x7f15fc0f7acd in _tc_free_internal ../../lib/talloc/talloc.c:1184
#6 0x7f15fc0f7acd in _tc_free_children_internal ../../lib/talloc/talloc.c:1669
#7 0x7f15fc0f7acd in _tc_free_internal ../../lib/talloc/talloc.c:1184
#8 0x7f15fc0f924c in _talloc_free_internal ../../lib/talloc/talloc.c:1248
#9 0x7f15fc0f924c in _talloc_free ../../lib/talloc/talloc.c:1792
#10 0x7f15fadac024 in ncacn_terminate_connection ../../source3/rpc_server/rpc_server.c:263
#11 0x7f15fadac024 in dcesrv_transport_terminate_connection ../../source3/rpc_server/rpc_server.c:251
#12 0x7f15fc11e5ef in dcesrv_terminate_connection ../../librpc/rpc/dcesrv_core.c:2968
#13 0x7f15fc125446 in dcesrv_read_fragment_done ../../librpc/rpc/dcesrv_core.c:3196
#14 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#15 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#16 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#17 0x7f15fb4f69a1 in _tevent_req_nterror ../../lib/util/tevent_ntstatus.c:46
#18 0x7f15fabda2f4 in dcerpc_read_ncacn_packet_done ../../librpc/rpc/dcerpc_util.c:612
#19 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#20 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#21 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#22 0x7f15fbff4228 in tstream_readv_pdu_readv_done ../../lib/tsocket/tsocket_helpers.c:313
#23 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#24 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#25 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#26 0x7f15fbff1800 in tstream_readv_done ../../lib/tsocket/tsocket.c:593
#27 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#28 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#29 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#30 0x7f15fadbc1a3 in tstream_npa_readv_msg_mode_handler ../../libcli/named_pipe_auth/npa_tstream.c:697
#31 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#32 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#33 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#34 0x7f15fbff4228 in tstream_readv_pdu_readv_done ../../lib/tsocket/tsocket_helpers.c:313
#35 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#36 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#37 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#38 0x7f15fbff1800 in tstream_readv_done ../../lib/tsocket/tsocket.c:593
#39 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#40 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#41 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#42 0x7f15fbff9691 in tstream_bsd_readv_handler ../../lib/tsocket/tsocket_bsd.c:2080
#43 0x7f15fbff6f85 in tstream_bsd_fde_handler ../../lib/tsocket/tsocket_bsd.c:1764
#44 0x7f15fb7d9ac1 in tevent_common_invoke_fd_handler ../../lib/tevent/tevent_fd.c:174
#45 0x7f15fb7ef185 in epoll_event_loop ../../lib/tevent/tevent_epoll.c:696
#46 0x7f15fb7ef185 in epoll_event_loop_once ../../lib/tevent/tevent_epoll.c:926
#47 0x7f15fb7e77b8 in std_event_loop_once ../../lib/tevent/tevent_standard.c:110
#48 0x7f15fb7d7549 in _tevent_loop_once ../../lib/tevent/tevent.c:820
#49 0x7f15fc936b7c in rpc_worker_main ../../source3/rpc_server/rpc_worker.c:1249
#50 0x5632ae1e1ec3 in main ../../source3/rpc_server/rpcd_lsad.c:132
#51 0x7f15f7c2a2ad in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
#52 0x7f15f7c2a378 in __libc_start_main_impl ../csu/libc-start.c:360
#53 0x5632ae162e64 in _start ../sysdeps/x86_64/start.S:115
0x50d000004f80 is located 112 bytes inside of 136-byte region [0x50d000004f10,0x50d000004f98)
freed by thread T0 here:
#0 0x7f15fcefb418 in free ../../../../libsanitizer/asan/asan_malloc_linux.cpp:52
#1 0x7f15fc0f857d in _tc_free_internal ../../lib/talloc/talloc.c:1222
#2 0x7f15fc0f8d0f in _talloc_free_internal ../../lib/talloc/talloc.c:1248
#3 0x7f15fc0f8d0f in talloc_unlink ../../lib/talloc/talloc.c:1473
#4 0x7f15fc934580 in rpc_worker_connection_terminated ../../source3/rpc_server/rpc_worker.c:143
#5 0x7f15fc9310bd in dcesrv_connection_destructor ../../source3/rpc_server/rpc_worker.c:175
#6 0x7f15fc0f7d76 in _tc_free_internal ../../lib/talloc/talloc.c:1158
#7 0x7f15fc0f7acd in _tc_free_children_internal ../../lib/talloc/talloc.c:1669
#8 0x7f15fc0f7acd in _tc_free_internal ../../lib/talloc/talloc.c:1184
#9 0x7f15fc0f924c in _talloc_free_internal ../../lib/talloc/talloc.c:1248
#10 0x7f15fc0f924c in _talloc_free ../../lib/talloc/talloc.c:1792
#11 0x7f15fadac024 in ncacn_terminate_connection ../../source3/rpc_server/rpc_server.c:263
#12 0x7f15fadac024 in dcesrv_transport_terminate_connection ../../source3/rpc_server/rpc_server.c:251
#13 0x7f15fc11e5ef in dcesrv_terminate_connection ../../librpc/rpc/dcesrv_core.c:2968
#14 0x7f15fc125446 in dcesrv_read_fragment_done ../../librpc/rpc/dcesrv_core.c:3196
#15 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#16 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#17 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#18 0x7f15fb4f69a1 in _tevent_req_nterror ../../lib/util/tevent_ntstatus.c:46
#19 0x7f15fabda2f4 in dcerpc_read_ncacn_packet_done ../../librpc/rpc/dcerpc_util.c:612
#20 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#21 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#22 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#23 0x7f15fbff4228 in tstream_readv_pdu_readv_done ../../lib/tsocket/tsocket_helpers.c:313
#24 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#25 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#26 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#27 0x7f15fbff1800 in tstream_readv_done ../../lib/tsocket/tsocket.c:593
#28 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#29 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
#30 0x7f15fb7dcdb7 in _tevent_req_error ../../lib/tevent/tevent_req.c:252
#31 0x7f15fadbc1a3 in tstream_npa_readv_msg_mode_handler ../../libcli/named_pipe_auth/npa_tstream.c:697
#32 0x7f15fb7dcae5 in _tevent_req_notify_callback ../../lib/tevent/tevent_req.c:177
#33 0x7f15fb7dcd1c in tevent_req_finish ../../lib/tevent/tevent_req.c:234
previously allocated by thread T0 here:
#0 0x7f15fcefc777 in malloc ../../../../libsanitizer/asan/asan_malloc_linux.cpp:69
#1 0x7f15fc0fbc57 in __talloc_with_prefix ../../lib/talloc/talloc.c:783
#2 0x7f15fc0fd8cf in __talloc ../../lib/talloc/talloc.c:825
#3 0x7f15fc0fd8cf in _talloc_named_const ../../lib/talloc/talloc.c:982
#4 0x7f15fc0fd8cf in _talloc_zero ../../lib/talloc/talloc.c:2421
#5 0x7f15fc93156e in rpc_worker_assoc_group_new ../../source3/rpc_server/rpc_worker.c:681
#6 0x7f15fc93156e in rpc_worker_assoc_group_find ../../source3/rpc_server/rpc_worker.c:730
#7 0x7f15fc120a18 in dcesrv_bind ../../librpc/rpc/dcesrv_core.c:1158
#8 0x7f15fc120a18 in dcesrv_process_ncacn_packet ../../librpc/rpc/dcesrv_core.c:2324
#9 0x7f15fc120a18 in dcesrv_loop_next_packet ../../librpc/rpc/dcesrv_core.c:3222
#10 0x7f15fc933722 in rpc_worker_new_client ../../source3/rpc_server/rpc_worker.c:489
#11 0x7f15fc933722 in rpc_worker_new_client_filter ../../source3/rpc_server/rpc_worker.c:558
#12 0x7f15fbef95ca in messaging_dispatch_waiters ../../source3/lib/messages.c:1343
#13 0x7f15fbefb589 in messaging_dispatch_rec ../../source3/lib/messages.c:1371
#14 0x7f15fbefb589 in messaging_recv_cb ../../source3/lib/messages.c:431
#15 0x7f15faddba9e in msg_dgm_ref_recv ../../lib/messaging/messages_dgm_ref.c:144
#16 0x7f15fadd6cc3 in messaging_dgm_recv ../../lib/messaging/messages_dgm.c:1426
#17 0x7f15fadd7618 in messaging_dgm_read_handler ../../lib/messaging/messages_dgm.c:1316
#18 0x7f15fb7d9ac1 in tevent_common_invoke_fd_handler ../../lib/tevent/tevent_fd.c:174
#19 0x7f15fb7ef185 in epoll_event_loop ../../lib/tevent/tevent_epoll.c:696
#20 0x7f15fb7ef185 in epoll_event_loop_once ../../lib/tevent/tevent_epoll.c:926
#21 0x7f15fb7e77b8 in std_event_loop_once ../../lib/tevent/tevent_standard.c:110
#22 0x7f15fb7d7549 in _tevent_loop_once ../../lib/tevent/tevent.c:820
#23 0x7f15fc936b7c in rpc_worker_main ../../source3/rpc_server/rpc_worker.c:1249
#24 0x5632ae1e1ec3 in main ../../source3/rpc_server/rpcd_lsad.c:132
#25 0x7f15f7c2a2ad in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
Björn Jacke [Tue, 15 Oct 2024 09:43:58 +0000 (11:43 +0200)]
samba-tool/backup: set the right permissions on our root dir
Since processes can run under the UID of the logged-in user, we need
to make sure that users have the required permissions here.
Signed-off-by: Bjoern Jacke <bjacke@samba.org>
Reviewed-by: Björn Baumbach <bbaumbach@samba.org>
Autobuild-User(master): Björn Baumbach <bb@sernet.de>
Autobuild-Date(master): Tue Dec 10 11:40:27 UTC 2024 on atb-devel-224
docs-xml: Change 'DEBUGLEVEL' -> 'level' to match the option description
Signed-off-by: Pavel Filipenský <pfilipensky@samba.org>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Autobuild-User(master): Pavel Filipensky <pfilipensky@samba.org>
Autobuild-Date(master): Fri Dec 6 13:33:38 UTC 2024 on atb-devel-224
s4:rpc_server/netlogon: fix error codes in dcesrv_netr_NetrLogonSendToSam
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Andreas Schneider <asn@samba.org>
Autobuild-User(master): Stefan Metzmacher <metze@samba.org>
Autobuild-Date(master): Thu Dec 5 17:46:49 UTC 2024 on atb-devel-224