Assume that we have node indices $\{1,\ldots,p\}$ and weights between the indices $w_{ji} = w_{ij}\geq 0$ for $i,j \in \{1,\ldots,p\}$ with $w_{ii} = 0$. Then the Laplacian matrix is defined as $L:= \operatorname{diag}(\sum_{j}w_{1j},\ldots,\sum_{j}w_{pj}) - W$, where $W$ is the matrix of the weights $w_{ij}$ and $\operatorname{diag}$ denotes the diagonal matrix with the corresponding entries. The Laplacian matrix can be seen as the discrete analogue of the Laplace operator for functions, which is essentially the sum of the second partial derivatives. Now consider a chain graph, i.e. $w_{12} = w_{23}=\dots =w_{(p-1)p} = 1$ and all other weights zero. Applying the Laplacian matrix to a vector $\vec{v}$, the interior entries of $L\vec{v}$ are (up to sign) $v_{i+1}-2v_{i}+v_{i-1}$, but the first entry is $v_{1}-v_{2}$, a first difference, which seems to me not in accordance with the second-derivative analogy. Are there other versions of the Laplacian for which this is taken care of? Or do we not care about this minor detail?
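For concreteness, the construction is easy to check numerically. Here is a short sketch using NumPy (the helper name `chain_laplacian` is just my own, not standard), which builds $L = D - W$ for the chain graph and shows the boundary anomaly:

```python
import numpy as np

def chain_laplacian(p):
    """Laplacian L = diag(row sums of W) - W for the chain graph
    with w_{i,i+1} = 1 and all other weights zero."""
    W = np.zeros((p, p))
    for i in range(p - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0
    D = np.diag(W.sum(axis=1))
    return D - W

p = 5
L = chain_laplacian(p)
v = np.array([1.0, 4.0, 9.0, 16.0, 25.0])  # v_i = i^2, a sampled quadratic

Lv = L @ v
# Interior entries are 2 v_i - v_{i-1} - v_{i+1} = -(v_{i+1} - 2 v_i + v_{i-1}),
# which equals -2 for this quadratic; but the first entry is v_1 - v_2 = -3,
# a first difference rather than a second one.
print(Lv)  # [-3. -2. -2. -2.  9.]
```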

The appearance of $v_2-v_1$ can be understood as an analogue of the Neumann boundary condition. There are (at least) two ways to think about it.

Let us approximate the "Laplacian" (second derivative operator) on the interval $[0,1]$ using graph Laplacians on "chain graphs" with vertices $\tfrac in$, $i=0,1,\ldots,n$. Then the graph Laplacian applied to a function $f$, rescaled by a factor $n^2$, converges to $-f''$ (the sign comes from the convention $L = D - W$), except at the boundary. If we impose the same rate of convergence at the boundary, we necessarily get the Neumann boundary conditions $f'(0) = 0$ and $f'(1) = 0$.
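This convergence can be checked numerically. A quick sketch with NumPy, taking $f(x)=\cos(\pi x)$ precisely because it satisfies the Neumann conditions $f'(0)=f'(1)=0$ (so the boundary entries stay bounded):

```python
import numpy as np

def chain_laplacian(n):
    # Laplacian L = D - W of the chain graph on the n+1 vertices 0, 1/n, ..., 1
    W = np.zeros((n + 1, n + 1))
    for i in range(n):
        W[i, i + 1] = W[i + 1, i] = 1.0
    return np.diag(W.sum(axis=1)) - W

n = 200
x = np.linspace(0.0, 1.0, n + 1)
f = np.cos(np.pi * x)                      # satisfies f'(0) = f'(1) = 0
f2 = -(np.pi ** 2) * np.cos(np.pi * x)     # exact second derivative f''

approx = n ** 2 * (chain_laplacian(n) @ f)  # rescaled graph Laplacian

# Interior entries approximate -f'' (sign convention L = D - W),
# with error of order n^{-2}.
err = np.max(np.abs(approx[1:-1] - (-f2[1:-1])))
print(err)

# Boundary entries are n^2 (f_0 - f_1) and n^2 (f_n - f_{n-1}); they remain
# bounded only because f'(0) = f'(1) = 0.
print(approx[0], approx[-1])
```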

The usual Laplacian is the generator of Brownian motion (up to a factor of $\tfrac12$). If we restrict our attention to a domain and wish to include its boundary, we may consider different boundary conditions. The Neumann condition (which in the case of the interval reduces to $f'(0) = 0$ and $f'(1) = 0$) corresponds to a process reflected at the boundary. Similarly, the graph Laplacian is the generator of a simple random walk on the graph, and in the case of the "chain graph" this random walk is reflected at the endpoints. This sort of explains the appearance of the "boundary derivative" $v_1 - v_2$.
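The random-walk picture can be made concrete: the transition matrix of the simple random walk is $P = D^{-1}W$, and on the chain graph a walker at an endpoint is pushed back inside with probability one, i.e. reflected. A small NumPy sketch:

```python
import numpy as np

p = 5
W = np.zeros((p, p))
for i in range(p - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
D = np.diag(W.sum(axis=1))

P = np.linalg.inv(D) @ W   # transition matrix of the simple random walk

# Interior vertices jump left or right with probability 1/2 each,
# while an endpoint jumps back inside with probability 1: reflection.
print(P[0])   # first row: the walk at vertex 0 moves to vertex 1 surely
print(P[2])   # an interior row: 1/2 left, 1/2 right

# The graph Laplacian is (up to normalisation) the generator: L = D (I - P).
L = D - W
assert np.allclose(L, D @ (np.eye(p) - P))
```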

That said, of course you can impose different boundary conditions on the "chain graph" by changing the matrix appropriately, and this is what people actually do when they study, for example, random walks killed or trapped on the boundary.
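For instance, the Dirichlet ("killed at the boundary") version is obtained by simply deleting the boundary rows and columns of $L$, which amounts to imposing $v = 0$ outside the interior; a NumPy sketch:

```python
import numpy as np

p = 7
W = np.zeros((p, p))
for i in range(p - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Dirichlet Laplacian: restrict to the interior vertices 1, ..., p-2.
# This is the generator of the random walk killed upon hitting the endpoints.
L_dir = L[1:-1, 1:-1]

v = np.arange(1, p - 1, dtype=float)  # values on interior vertices only
# Every row of L_dir now computes the full second difference
# 2 v_i - v_{i-1} - v_{i+1}, with v = 0 implied at the killed endpoints --
# the Dirichlet boundary condition.
print(L_dir @ v)
```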