Merged
1 change: 1 addition & 0 deletions ChangeLog
@@ -1,5 +1,6 @@
2.0.13

* optimize v2 request sizes
* fix socks5 issues
* fix issue in loading v2 resume data merkle trees

16 changes: 11 additions & 5 deletions include/libtorrent/torrent_handle.hpp
@@ -289,11 +289,13 @@ namespace aux {
// The overload taking a raw pointer to the data is a blocking call. It
// won't return until the libtorrent thread has copied the data into its
// disk write buffer. ``data`` is expected to point to a buffer of as
// many bytes as the size of the specified piece. See
// file_storage::piece_size().
// many bytes as the size of the specified piece.
// For v2 torrents, pieces at the end of files may not be full sized.
// For backwards compatibility, it's OK to pass a full sized piece as
// well.
//
// The data in the buffer is copied and passed on to the disk IO thread
// to be written at a later point.
// to be written at some later point in time.
//
// The overload taking a ``std::vector<char>`` is not blocking, it will
// send the buffer to the main thread and return immediately.
@@ -308,8 +310,12 @@ namespace aux {
// alert, read_piece_alert. Since this alert is a response to an explicit
// call, it will always be posted, regardless of the alert mask.
//
// Note that if you read multiple pieces, the read operations are not
// guaranteed to finish in the same order as you initiated them.
// .. note:: If you read multiple pieces, the read operations are not
// guaranteed to finish in the same order as you initiated them.
//
// .. note:: the size of the buffer passed back in the alert is not
// necessarily piece_length() long. The last piece or pieces at the end
// of files (in v2 and hybrid torrents) are not full size.
void read_piece(piece_index_t piece) const;

// Returns true if this piece has been completely downloaded and written
10 changes: 10 additions & 0 deletions include/libtorrent/torrent_info.hpp
@@ -505,6 +505,16 @@ TORRENT_VERSION_NAMESPACE_3
// except for the last piece, which may be shorter.
int piece_size(piece_index_t index) const { return m_files.piece_size(index); }

// returns the piece size appropriate for computing request lengths.
// For v2 torrents, pieces at the end of files may be shorter than
// the main piece size. This applies to hybrid torrents as well.
int piece_size_for_req(piece_index_t index) const
{
return v2()
? m_files.piece_size2(index)
: m_files.piece_size(index);
}
arvidn marked this conversation as resolved.

Copilot AI commented on lines +508 to +516 (Mar 28, 2026):

This PR introduces torrent_info::piece_size_for_req() as a new public helper with v2-specific semantics. There are already unit tests for file_storage::piece_size2(), but I couldn't find any tests that exercise piece_size_for_req() directly (including hybrid torrents) to ensure it keeps request sizing correct at file boundaries. Please add coverage to lock in the intended behavior (v2()/hybrid -> piece_size2, v1-only -> piece_size).

// ``hash_for_piece()`` takes a piece-index and returns the 20-bytes
// sha1-hash for that piece and ``info_hash()`` returns the 20-bytes
// sha1-hash for the info-section of the torrent file.
18 changes: 1 addition & 17 deletions src/bt_peer_connection.cpp
@@ -879,23 +879,7 @@ namespace {
TORRENT_ASSERT(t);
auto const dlq = download_queue();
for (pending_block const& pb : dlq)
{
peer_request r;
r.piece = pb.block.piece_index;
r.start = pb.block.block_index * t->block_size();
r.length = t->block_size();
// if it's the last piece, make sure to
// set the length of the request to not
// exceed the end of the torrent. This is
// necessary in order to maintain a correct
// m_outstanding_bytes
if (r.piece == t->torrent_file().last_piece())
{
r.length = std::min(t->torrent_file().piece_size(
r.piece) - r.start, r.length);
}
incoming_reject_request(r);
}
incoming_reject_request(t->to_req(pb.block));
}
}

6 changes: 1 addition & 5 deletions src/http_seed_connection.cpp
@@ -150,11 +150,7 @@ namespace libtorrent {
// would otherwise point to one past the end
int const correction = ret.bytes_downloaded ? -1 : 0;
ret.block_index = (pr.start + ret.bytes_downloaded + correction) / t->block_size();
ret.full_block_bytes = t->block_size();
piece_index_t const last_piece = t->torrent_file().last_piece();
if (ret.piece_index == last_piece && ret.block_index
== t->torrent_file().piece_size(last_piece) / t->block_size())
ret.full_block_bytes = t->torrent_file().piece_size(last_piece) % t->block_size();
ret.full_block_bytes = std::min(t->block_size(), t->torrent_file().piece_size_for_req(ret.piece_index) - pr.start);
return ret;
}

75 changes: 19 additions & 56 deletions src/peer_connection.cpp
@@ -1271,7 +1271,7 @@ namespace libtorrent {
return p.piece >= piece_index_t(0)
&& p.piece < ti.end_piece()
&& p.start >= 0
&& p.start < ti.piece_length()
&& p.start < ti.piece_size_for_req(p.piece)
&& t->to_req(piece_block(p.piece, p.start / t->block_size())) == p;
}

@@ -1599,9 +1599,9 @@ namespace libtorrent {
if (r.piece < piece_index_t{}
|| r.piece >= t->torrent_file().files().end_piece()
|| r.start < 0
|| r.start >= t->torrent_file().piece_length()
|| r.start >= t->torrent_file().piece_size_for_req(r.piece)
|| (r.start % block_size) != 0
|| r.length != std::min(t->torrent_file().piece_size(r.piece) - r.start, block_size))
|| r.length != std::min(t->torrent_file().piece_size_for_req(r.piece) - r.start, block_size))
{
#ifndef TORRENT_DISABLE_LOGGING
peer_log(peer_log_alert::info, "REJECT_PIECE", "invalid reject message (%d, %d, %d)"
@@ -3717,25 +3717,15 @@ namespace libtorrent {
{
piece_block const b = pb.block;

int const block_offset = b.block_index * t->block_size();
int const block_size
= std::min(t->torrent_file().piece_size(b.piece_index)-block_offset,
t->block_size());
TORRENT_ASSERT(block_size > 0);
TORRENT_ASSERT(block_size <= t->block_size());

// we can't cancel the piece if we've started receiving it
if (m_receiving_block == b) continue;

peer_request r;
r.piece = b.piece_index;
r.start = block_offset;
r.length = block_size;
peer_request const r = t->to_req(b);

#ifndef TORRENT_DISABLE_LOGGING
peer_log(peer_log_alert::outgoing_message, "CANCEL"
, "piece: %d s: %d l: %d b: %d"
, static_cast<int>(b.piece_index), block_offset, block_size, b.block_index);
, static_cast<int>(r.piece), r.start, r.length, b.block_index);
#endif
write_cancel(r);
}
@@ -3784,28 +3774,18 @@ namespace libtorrent {
return;
}

int const block_offset = block.block_index * t->block_size();
int const block_size
= std::min(t->torrent_file().piece_size(block.piece_index) - block_offset,
t->block_size());
TORRENT_ASSERT(block_size > 0);
TORRENT_ASSERT(block_size <= t->block_size());

it->not_wanted = true;

if (force) t->picker().abort_download(block, peer_info_struct());

if (m_outstanding_bytes < block_size) return;
peer_request const r = t->to_req(block);

peer_request r;
r.piece = block.piece_index;
r.start = block_offset;
r.length = block_size;
if (m_outstanding_bytes < r.length) return;

#ifndef TORRENT_DISABLE_LOGGING
peer_log(peer_log_alert::outgoing_message, "CANCEL"
, "piece: %d s: %d l: %d b: %d"
, static_cast<int>(block.piece_index), block_offset, block_size, block.block_index);
, static_cast<int>(r.piece), r.start, r.length, block.block_index);
#endif
write_cancel(r);
}
@@ -4081,24 +4061,14 @@ namespace libtorrent {
continue;
}

int block_offset = block.block.block_index * t->block_size();
int bs = std::min(t->torrent_file().piece_size(
block.block.piece_index) - block_offset, t->block_size());
TORRENT_ASSERT(bs > 0);
TORRENT_ASSERT(bs <= t->block_size());

peer_request r;
r.piece = block.block.piece_index;
r.start = block_offset;
r.length = bs;

peer_request r = t->to_req(block.block);
if (m_download_queue.empty())
m_counters.inc_stats_counter(counters::num_peers_down_requests);

TORRENT_ASSERT(validate_piece_request(t->to_req(block.block)));
block.send_buffer_offset = aux::numeric_cast<std::uint32_t>(m_send_buffer.size());
m_download_queue.push_back(block);
m_outstanding_bytes += bs;
m_outstanding_bytes += r.length;
#if TORRENT_USE_INVARIANT_CHECKS
check_invariant();
#endif
@@ -4128,9 +4098,10 @@ namespace libtorrent {
m_download_queue.push_back(block);
if (m_queued_time_critical) --m_queued_time_critical;

block_offset = block.block.block_index * t->block_size();
bs = std::min(t->torrent_file().piece_size(
block.block.piece_index) - block_offset, t->block_size());
int const block_offset = block.block.block_index * t->block_size();
int const bs =
std::min(t->torrent_file().piece_size_for_req(block.block.piece_index)
- block_offset, t->block_size());
TORRENT_ASSERT(bs > 0);
TORRENT_ASSERT(bs <= t->block_size());

@@ -6607,28 +6578,20 @@ namespace libtorrent {
// if the piece is fully downloaded, we might have popped it from the
// download queue already
int outstanding_bytes = 0;
// bool in_download_queue = false;
int const bs = t->block_size();
piece_block last_block(ti.last_piece()
, (ti.piece_size(ti.last_piece()) + bs - 1) / bs);
, (ti.piece_size_for_req(ti.last_piece()) + bs - 1) / bs);

for (std::vector<pending_block>::const_iterator i = m_download_queue.begin()
, end(m_download_queue.end()); i != end; ++i)
{
TORRENT_ASSERT(i->block.piece_index <= last_block.piece_index);
TORRENT_ASSERT(i->block.piece_index < last_block.piece_index
|| i->block.block_index <= last_block.block_index);

outstanding_bytes += t->to_req(i->block).length;
if (m_received_in_piece && i == m_download_queue.begin())
{
// in_download_queue = true;
// this assert is not correct since block may have different sizes
// and may not be returned in the order they were requested
// TORRENT_ASSERT(t->to_req(i->block).length >= m_received_in_piece);
outstanding_bytes += t->to_req(i->block).length - m_received_in_piece;
}
else
{
outstanding_bytes += t->to_req(i->block).length;
}
outstanding_bytes -= m_received_in_piece;
}
//if (p && p->bytes_downloaded < p->full_block_bytes) TORRENT_ASSERT(in_download_queue);

23 changes: 13 additions & 10 deletions src/torrent.cpp
@@ -807,7 +807,7 @@ aux::vector<download_priority_t, piece_index_t> file_to_piece_prio(
return;
}

const int piece_size = m_torrent_file->piece_size(piece);
const int piece_size = m_torrent_file->piece_size_for_req(piece);
arvidn marked this conversation as resolved.
const int blocks_in_piece = (piece_size + block_size() - 1) / block_size();

TORRENT_ASSERT(blocks_in_piece > 0);
@@ -1254,7 +1254,7 @@ aux::vector<download_priority_t, piece_index_t> file_to_piece_prio(

if (rp->blocks_left == 0)
{
int size = m_torrent_file->piece_size(r.piece);
int size = m_torrent_file->piece_size_for_req(r.piece);
if (rp->fail)
{
m_ses.alerts().emplace_alert<read_piece_alert>(
@@ -1376,10 +1376,13 @@ aux::vector<download_priority_t, piece_index_t> file_to_piece_prio(
return;

// make sure the piece size is correct
if (data.size() != std::size_t(m_torrent_file->piece_size(piece)))
return;

add_piece(piece, data.data(), flags);
// we check against the v1 piece size as well, for backwards compatibility
if (data.size() == std::size_t(m_torrent_file->piece_size_for_req(piece))
|| data.size() == std::size_t(m_torrent_file->piece_size(piece)))
{
data.resize(std::size_t(m_torrent_file->piece_size_for_req(piece)));
add_piece(piece, data.data(), flags);
}
}

// TODO: 3 there's some duplication between this function and
@@ -1393,7 +1396,7 @@ aux::vector<download_priority_t, piece_index_t> file_to_piece_prio(
if (piece >= torrent_file().end_piece())
return;

int const piece_size = m_torrent_file->piece_size(piece);
int const piece_size = m_torrent_file->piece_size_for_req(piece);
int const blocks_in_piece = (piece_size + block_size() - 1) / block_size();

if (m_deleted) return;
@@ -1519,8 +1522,8 @@ aux::vector<download_priority_t, piece_index_t> file_to_piece_prio(
peer_request torrent::to_req(piece_block const& p) const
{
int const block_offset = p.block_index * block_size();
int const block = std::min(torrent_file().piece_size(
p.piece_index) - block_offset, block_size());
int const piece_sz = torrent_file().piece_size_for_req(p.piece_index);
int const block = std::min(piece_sz - block_offset, block_size());
TORRENT_ASSERT(block > 0);
Copilot AI commented on lines 1522 to 1527 (Mar 28, 2026):

torrent::to_req() now uses piece_size_for_req() to size the last block at v2 file boundaries. There don't appear to be any tests that assert the generated peer_request lengths for v2-only/hybrid torrents (i.e. that requests never extend into pad-space). Adding a focused unit/integration test for this would help prevent regressions since this affects core wire-protocol behavior.
TORRENT_ASSERT(block <= block_size());

@@ -7391,7 +7394,7 @@ namespace {
TORRENT_ASSERT(counter * blocks_per_piece + pi.blocks_in_piece <= int(blk.size()));
block_info* blocks = &blk[std::size_t(counter * blocks_per_piece)];
pi.blocks = blocks;
int const piece_size = ti.piece_size(i->index);
int const piece_size = ti.piece_size_for_req(i->index);
int idx = -1;
for (auto const& info : p.blocks_for_piece(*i))
{