I've written a review of Joe Carlsmith's report Is Power-Seeking AI an Existential Risk? I highly recommend the report and previous reviews for those interested in getting a better understanding of the considerations around AI x-risk. I'll excerpt a few portions below.

Thinking about reaching goal states rather than avoiding catastrophe